Dataset schema (column, type, values / length range):

Column         Type            Values / length range
CHANNEL_NAME   stringclasses   1 value
URL            stringlengths   43–43
TITLE          stringlengths   19–90
DESCRIPTION    stringlengths   475–4.65k
TRANSCRIPTION  stringlengths   0–20.1k
SEGMENTS       stringlengths   2–30.8k
Two Minute Papers
https://www.youtube.com/watch?v=knIzDj1Ocoo
This Blind Robot Learned To Climb Any Terrain! 🤖
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/wandb/getting-started/reports/Debug-Compare-Reproduce-Machine-Learning-Models--VmlldzoyNzY5MDk?utm_source=karoly 📝 The paper "Learning Quadrupedal Locomotion over Challenging Terrain " is available here: https://leggedrobotics.github.io/rl-blindloco/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #robots
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. What you see here is one of my favorite kinds of papers, where we train a robot in a software simulation and then try to bring it into the real world and see if it can still navigate there. The simulation is the textbook and the real world is the exam, if you will. And it's a really hard one. So let's see if this one passes or fails. Now, the real world is messy and full of obstacles, so to even have a fighting chance of training a robot to deal with that, the simulation had better have those properties too. So let's add some hills, steps, and stairs and see if it can overcome these. Well, not at first: as you see, the agent is very clumsy and can barely walk through even simple terrain. But as time passes, it grows to be a little more confident, and with that, the terrain also progressively becomes more and more difficult in order to maximize learning. That is a great life lesson right there. Now, while we look at these terrain experiments, you can start holding on to your papers, because this robot knows absolutely nothing about the outside world. It has no traditional cameras, no radar, no lidar, no depth sensors, none of that. Only proprioceptive sensors are allowed, which means that the only thing the robot senses is its own internal state, and that's it. Whoa! For instance, it knows the orientation and twist of its base unit, a little joint information like positions and velocities, and really not much more, all of which is proprioceptive information. Yup, that's it. And bravo, it is doing quite well in the simulation. However, reality is never quite the same as the simulation, so I wonder if what was learned here can be used there. Let's see. Wow! Look at how well it traverses this rocky mountain stream, and not even this nightmare-y, snowy descent gives it too much trouble. It works even if it cannot get a proper foothold and is slipping all the time, and it also learned to engage in this adorable jumpy behavior when stuck in vegetation. And it learned all this by itself. Absolute witchcraft. When looking at this table, we now understand that it still reaches reasonable speeds through moss, and why it is slower in vegetation than in mud. Really cool. If you have been holding onto your paper so far, now squeeze that paper, because if all that wasn't hard enough, let's add an additional 10 kilogram, or 22 pound, payload and see if it can shoulder it. And let's be honest, this should not happen. Wow! Look at that! It can not only shoulder it, but it also adjusts its movement to the changes in its own internal physics. To accomplish this, it uses a concept called learning by cheating, or the teacher-student architecture, where the agent that learned in the simulator becomes the teacher. Now, after we escape the simulator into the real world, we cannot lean on the simulator anymore to see what works, so what do we do? Well, here the teacher takes the place of the simulator: it distills its knowledge down to train the student further. If both of them do their jobs really well, the student will pass the exam with flying colors. As you see, that is exactly the case here. This is an outrageously good paper. What a time to be alive! This episode has been supported by Weights & Biases. In this post, they show you how to use their reports to explain how your model works, show plots of how model versions improved, discuss bugs, and demonstrate progress towards milestones.
If you work with learning algorithms on a regular basis, make sure to check out Weights & Biases. Their system is designed to help you organize your experiments, and it is so good it could shave off weeks or even months of work from your projects, and it is completely free for all individuals, academics, and open source projects. This really is as good as it gets, and it is hardly a surprise that they are now used by over 200 companies and research institutions. Make sure to visit them through wandb.com/papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their longstanding support, and for helping us make better videos for you. Thanks for watching, and for your generous support, and I'll see you next time.
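The teacher-student ("learning by cheating") idea described in the transcript above can be sketched in a few lines. Below is a minimal, hypothetical PyTorch illustration of the distillation step: the teacher sees privileged simulator information, the student sees only proprioceptive observations, and the student is trained to imitate the teacher's actions. The network sizes, dimensions, and random placeholder data are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

PRIVILEGED_DIM = 64   # e.g. terrain/contact info, only available inside the simulator (assumed size)
PROPRIO_DIM = 48      # joint positions/velocities, base orientation and twist (assumed size)
ACTION_DIM = 12       # e.g. target joint positions for a quadruped (assumed size)

# Simple MLPs stand in for the paper's actual architectures.
teacher = nn.Sequential(nn.Linear(PROPRIO_DIM + PRIVILEGED_DIM, 256),
                        nn.ReLU(), nn.Linear(256, ACTION_DIM))
student = nn.Sequential(nn.Linear(PROPRIO_DIM, 256),
                        nn.ReLU(), nn.Linear(256, ACTION_DIM))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def distillation_step(proprio, privileged):
    """One supervised step: the student imitates the teacher's action while
    seeing only the proprioceptive part of the observation."""
    with torch.no_grad():
        target_action = teacher(torch.cat([proprio, privileged], dim=-1))
    loss = nn.functional.mse_loss(student(proprio), target_action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Random placeholder batch standing in for simulator rollouts.
proprio = torch.randn(32, PROPRIO_DIM)
privileged = torch.randn(32, PRIVILEGED_DIM)
print(distillation_step(proprio, privileged))
```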
[{"start": 0.0, "end": 4.22, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Karajor Naifahir."}, {"start": 4.22, "end": 10.56, "text": " What you see here is one of my favorite kind of paper where we train a robot in a software simulation"}, {"start": 10.56, "end": 16.56, "text": " and then we try to bring it into the real world and see if it can still navigate there."}, {"start": 16.56, "end": 21.06, "text": " The simulation is the textbook and the real world is the exam if you will."}, {"start": 21.06, "end": 22.900000000000002, "text": " And it's a really hard one."}, {"start": 22.900000000000002, "end": 26.14, "text": " So let's see if this one passes or fails."}, {"start": 26.14, "end": 33.94, "text": " Now the real world is messy and full of obstacles so to even have a fighting chance in training a robot to deal with that,"}, {"start": 33.94, "end": 37.14, "text": " the simulation better has those properties too."}, {"start": 37.14, "end": 43.24, "text": " So let's add some hills, steps and stairs and let's see if it can overcome these."}, {"start": 43.24, "end": 51.540000000000006, "text": " Well, not at first as you see the agent is very clumsy and can barely walk through a simple terrain."}, {"start": 51.54, "end": 62.44, "text": " But as time passes it grows to be a little more confident and with that the terrain also progressively becomes more and more difficult in order to maximize learning."}, {"start": 62.44, "end": 65.44, "text": " That is a great life lesson right there."}, {"start": 65.44, "end": 74.74, "text": " Now while we look at these terrain experiments you can start holding on to your papers because this robot knows absolutely nothing about the outside world."}, {"start": 74.74, "end": 82.74, "text": " It has no traditional cameras, no radar, no lidar, no depth sensors, no no no, none of that."}, {"start": 82.74, "end": 91.74, "text": " Only proprioceptive sensors are allowed which means that the only thing that the robot senses is its own internal state and that's it."}, {"start": 91.74, "end": 92.74, "text": " Whoa!"}, {"start": 92.74, "end": 101.74, "text": " For instance, it knows about its orientation and twist of the base unit, a little joint information like positions and velocities"}, {"start": 101.74, "end": 106.74, "text": " and really not that much more, all of which is proprioceptive information."}, {"start": 106.74, "end": 108.74, "text": " Yup, that's it."}, {"start": 108.74, "end": 113.24, "text": " And bravo, it is doing quite well in the simulation."}, {"start": 113.24, "end": 121.24, "text": " However, reality is never quite the same as the simulations, so I wonder if what was learned here can be used there."}, {"start": 121.24, "end": 122.24, "text": " Let's see."}, {"start": 124.24, "end": 125.24, "text": " Wow!"}, {"start": 125.24, "end": 135.24, "text": " Look at how well it traverses through this Rocky Mountain, stream,"}, {"start": 135.24, "end": 142.74, "text": " and not even this nightmare-y, slow-ed descent gives it too much trouble."}, {"start": 142.74, "end": 154.74, "text": " It works even if it cannot get a proper foothold and it is sleeping all the time, and it also learned to engage in this adorable jumpy behavior when stuck in vegetation."}, {"start": 154.74, "end": 157.74, "text": " And it learned all this by itself."}, {"start": 157.74, "end": 159.74, "text": " Absolute witchcraft."}, {"start": 159.74, "end": 169.74, "text": " When looking at this table, we now understand that it still has reasonable speeds 
through moss and why it is slower in vegetation than in mud."}, {"start": 169.74, "end": 170.74, "text": " Really cool."}, {"start": 170.74, "end": 184.24, "text": " If you have been holding onto your paper so far, now squeeze that paper because if all that wasn't hard enough, let's add an additional 10 kilogram or 22 pound payload and see if it can shoulder it."}, {"start": 184.24, "end": 187.24, "text": " And let's be honest, this should not happen."}, {"start": 191.24, "end": 193.24, "text": " Wow! Look at that!"}, {"start": 193.24, "end": 200.24, "text": " It can not only shoulder it, but it also adjusts its movement to the changes to its own internal physics."}, {"start": 200.24, "end": 212.24, "text": " To accomplish this, it uses a concept that is called learning by cheating or the teacher student architecture when the agent that learned in the simulator will be the teacher."}, {"start": 212.24, "end": 223.24, "text": " Now, after we escaped the simulator into the real world, we cannot lean on the simulator anymore to try to see what works, so then what do we do?"}, {"start": 223.24, "end": 231.24, "text": " Well, here the teacher takes the place of the simulator, it distills its knowledge down to train the student further."}, {"start": 231.24, "end": 238.24, "text": " If both of them do their jobs really well, the student will successfully pass the exam with flying colors."}, {"start": 238.24, "end": 241.24, "text": " As you see, it is exactly the case here."}, {"start": 241.24, "end": 244.24, "text": " This is an outrageously good paper."}, {"start": 244.24, "end": 246.24, "text": " What a time to be alive!"}, {"start": 246.24, "end": 249.24, "text": " This episode has been supported by weights and biases."}, {"start": 249.24, "end": 261.24, "text": " In this post, they show you how to use their reports to explain how your model works, show plots of how model versions improved, discuss bugs, and demonstrate progress towards milestones."}, {"start": 261.24, "end": 281.24, "text": " If you work with learning algorithms on a regular basis, make sure to check out weights and biases. Their system is designed to help you organize your experiments, and it is so good, it could shave off weeks or even months of work from your projects, and is completely free for all individuals, academics, and open source projects."}, {"start": 281.24, "end": 289.24, "text": " This really is as good as it gets, and it is hardly a surprise that they are now used by over 200 companies and research institutions."}, {"start": 289.24, "end": 298.24, "text": " Make sure to visit them through wnb.com slash papers, or just click the link in the video description, and you can get a free demo today."}, {"start": 298.24, "end": 304.24, "text": " Our thanks to weights and biases for their longstanding support, and for helping us make better videos for you."}, {"start": 304.24, "end": 328.24, "text": " Thanks for watching, and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=2pWK0arWAmU
Remember, This Meeting Never Happened! 🚶🚶‍♀️
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their report on this exact paper is available here: https://wandb.ai/wandb/retiming-video/reports/Retiming-Instances-in-a-Video--VmlldzozMzUwNTk 📝 The paper "Layered Neural Rendering for Retiming People in Video" is available here: https://retiming.github.io/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Let's ask these humans to jump into the pool at the same time on 3, 2, 1, go. Then this happens. And if we slow things down, like we would slow down an action sequence in a movie, look, this is 1, and then 2, and then 3. Not even close to being at the same time. We see that this footage is completely unsalvageable. And please note the smile at the end of the video; I'll tell you in a moment why. Imagine that after a little training, the little chaps were able to jump into the pool at the same time. Congratulations! But wait a second, look, the smile is the same as in the previous footage. This is not a different video; this is the same video as before, but it has been re-timed to seem as if the jumps happened at the same time. Absolutely amazing! And how about this footage? They move in tandem. Totally in tandem, right? Except that this was the original footage, which is a complete mess. And now, with this technique, it could be re-timed as if this happened. Incredible work. So, what is this wizardry? This is a new learning-based technique that can pull off nice tricks like this, and it can deal with cases that would otherwise be extremely challenging to re-time by hand. To find out why, let's have a look at this footage. Mm-hmm, got it. And now, let's try to pretend that this meeting never happened. Now, first, to be able to do this, we have to be able to recognize the test subjects in the video. This method performs pose estimation. These are the skeletons that you see here. Nothing new here; this can be done with off-the-shelf components. However, that is not nearly enough to remove or re-time them. We need to do more. Why is that? Well, look, mirror reflections and shadows. If we were only able to remove the person, these secondary effects would give the trick away in a second. To address this issue, one of the key contributions of this new AI-based technique is that it is able to find these mirror reflections, shadows, and even more, which can be defined as motions that correlate with the test subject or, in other words, things that move together with the humans. And if we have all of these puzzle pieces, we can use a neural renderer to remove people or even adjust the timing of these videos. And now, hold on to your papers, because it is not at all limited to shadows and mirror reflections. Remember this example. Here, the algorithm recognizes that these people cause deformations in the trampolines and is able to line them up together. Note that we need some intense and variable time-warping to make this happen. And, as an additional bonus, the photobombing human has been removed from the footage. The neural renderer used in this work is from the Pix2Pix paper from only three years ago, and now, with some ingenious improvements, it can be elevated to a whole new level. Huge congratulations to the authors for this incredible achievement. And all this can be done in an interactive app that is able to quickly preview the results for us. What a time to be alive! What you see here is a report on this exact paper we have talked about, which was made by Weights & Biases. I put a link to it in the description. Make sure to have a look; I think it helps you understand this paper better. During my PhD studies, I trained a ton of neural networks which were used in our experiments. However, over time, there was just too much data in our repositories, and what I am looking for is not data, but insight.
And that's exactly how Weights & Biases helps you, by organizing your experiments. It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub, and more. And get this, Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com/papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support, and for helping us make better videos for you. Thanks for watching, and for your generous support, and I'll see you next time.
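The transcript above describes decomposing the video into per-person layers that also carry correlated effects (shadows, reflections), after which re-timing amounts to reading each layer at its own time offset and compositing the result back to front. Here is a minimal NumPy sketch of that compositing step only; the learned layer decomposition is assumed to be given, and the toy shapes, offsets, and data are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def composite(background, layers, t, offsets):
    """Composite RGBA `layers` over `background` at frame t, reading each
    layer at its own shifted time index (this is the re-timing)."""
    frame = background[t].astype(np.float64)
    for layer, dt in zip(layers, offsets):
        src = layer[int(np.clip(t + dt, 0, layer.shape[0] - 1))]
        rgb, alpha = src[..., :3], src[..., 3:4]   # alpha in [0, 1]
        frame = alpha * rgb + (1.0 - alpha) * frame
    return frame.clip(0, 255).astype(np.uint8)

# Toy data: a 10-frame, 4x4 clip with a background plate and two person layers.
rng = np.random.default_rng(0)
T, H, W = 10, 4, 4
background = rng.integers(0, 255, (T, H, W, 3)).astype(np.uint8)
layers = [np.concatenate([rng.integers(0, 255, (T, H, W, 3)).astype(np.float64),
                          rng.random((T, H, W, 1))], axis=-1) for _ in range(2)]

# Delay the second person by 3 frames so both "jump" at the same time.
retimed_frame = composite(background, layers, t=5, offsets=[0, -3])
print(retimed_frame.shape)
```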
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karajjona Ifehir."}, {"start": 4.4, "end": 12.0, "text": " Let's ask these humans to jump into the pool at the same time on 3, 2, 1, go."}, {"start": 12.0, "end": 14.0, "text": " Then this happens."}, {"start": 14.0, "end": 19.0, "text": " And if we slow things down, like we would slow down an action sequence in a movie,"}, {"start": 19.0, "end": 23.8, "text": " look, this is 1, and then 2, and then 3."}, {"start": 23.8, "end": 26.8, "text": " Not even close to being in the same time."}, {"start": 26.8, "end": 30.400000000000002, "text": " We see that this footage is completely unselfageable."}, {"start": 30.400000000000002, "end": 35.4, "text": " And please note the smile at the end of the video I'll tell you in a moment why."}, {"start": 35.4, "end": 42.4, "text": " Imagine that after a little training, the little chaps were able to jump into the pool at the same time."}, {"start": 42.4, "end": 44.2, "text": " Congratulations!"}, {"start": 44.2, "end": 50.400000000000006, "text": " But wait a second, look, the smile is the same as in the previous footage."}, {"start": 50.4, "end": 60.2, "text": " This is not a different video, this is the same video as before, but it has been re-timed to seem as if the jumps happened at the same time."}, {"start": 60.2, "end": 62.2, "text": " Absolutely amazing!"}, {"start": 62.2, "end": 64.4, "text": " And how about this footage?"}, {"start": 64.4, "end": 66.4, "text": " They move in tandem."}, {"start": 66.4, "end": 69.4, "text": " Totally in tandem, right?"}, {"start": 69.4, "end": 74.2, "text": " Except that this was the original footage, which is a complete mess."}, {"start": 74.2, "end": 79.6, "text": " And now, with this technique, it could be re-timed as if this happened."}, {"start": 79.6, "end": 81.19999999999999, "text": " Incredible work."}, {"start": 81.19999999999999, "end": 83.39999999999999, "text": " So, what is this with surgery?"}, {"start": 83.39999999999999, "end": 88.19999999999999, "text": " This is a new learning-based technique that can pull off nice tricks like this,"}, {"start": 88.19999999999999, "end": 94.19999999999999, "text": " and it can deal with cases that would otherwise be extremely challenging to re-time by hand."}, {"start": 94.19999999999999, "end": 97.19999999999999, "text": " To find out why, let's have a look at this footage."}, {"start": 97.19999999999999, "end": 99.0, "text": " Mm-hmm, got it."}, {"start": 99.0, "end": 103.39999999999999, "text": " And now, let's try to pretend that this meeting never happened."}, {"start": 103.39999999999999, "end": 109.39999999999999, "text": " Now, first, to be able to do this, we have to be able to recognize the test subjects of the video."}, {"start": 109.4, "end": 112.2, "text": " This method performs pose estimation."}, {"start": 112.2, "end": 115.0, "text": " These are the skeletons that you see here."}, {"start": 115.0, "end": 116.0, "text": " Nothing new here."}, {"start": 116.0, "end": 118.80000000000001, "text": " This can be done with off-the-shelf components."}, {"start": 118.80000000000001, "end": 123.0, "text": " However, that is not nearly enough to remove or re-time them."}, {"start": 123.0, "end": 124.60000000000001, "text": " We need to do more."}, {"start": 124.60000000000001, "end": 126.0, "text": " Why is that?"}, {"start": 126.0, "end": 130.20000000000002, "text": " Well, look, mirror reflections and shadows."}, {"start": 130.20000000000002, "end": 134.8, 
"text": " If we would only be able to remove the person, these secondary effects would give the trick"}, {"start": 134.8, "end": 136.70000000000002, "text": " away in a second."}, {"start": 136.7, "end": 142.2, "text": " To address this issue, one of the key contributions of this new AI-based technique is that it is able"}, {"start": 142.2, "end": 148.6, "text": " to find these mirror reflections, shadows, and even more, which can be defined as motions"}, {"start": 148.6, "end": 155.79999999999998, "text": " that correlate with the test subject or, in other words, things that move together with the humans."}, {"start": 155.79999999999998, "end": 161.1, "text": " And if we have all of these puzzle pieces, we can use a newer render to remove people"}, {"start": 161.1, "end": 166.29999999999998, "text": " or even adjust the timing of these videos."}, {"start": 166.3, "end": 171.60000000000002, "text": " And now, hold on to your papers because it is not at all limited to shadows and mirror"}, {"start": 171.60000000000002, "end": 172.60000000000002, "text": " reflections."}, {"start": 172.60000000000002, "end": 174.60000000000002, "text": " Remember this example."}, {"start": 174.60000000000002, "end": 180.0, "text": " Here, the algorithm recognizes that these people cause deformations in the trampolines"}, {"start": 180.0, "end": 182.8, "text": " and is able to line them up together."}, {"start": 182.8, "end": 189.20000000000002, "text": " Note that we need some intense and variable time-warping to make this happen."}, {"start": 189.20000000000002, "end": 194.4, "text": " And, as an additional bonus, the photo bombing human has been removed from the footage."}, {"start": 194.4, "end": 200.4, "text": " The neural renderer used in this work is from the Pix2Pix paper from only three years ago,"}, {"start": 200.4, "end": 205.8, "text": " and now, with some ingenious improvements, it can be elevated to a whole new level."}, {"start": 205.8, "end": 209.70000000000002, "text": " Huge congratulations to the authors for this incredible achievement."}, {"start": 209.70000000000002, "end": 215.9, "text": " And all this can be done in an interactive app that is able to quickly preview the results for us."}, {"start": 215.9, "end": 217.70000000000002, "text": " What a time to be alive!"}, {"start": 217.70000000000002, "end": 224.1, "text": " What you see here is a report of this exact paper we have talked about, which was made by weights and biases."}, {"start": 224.1, "end": 226.1, "text": " I put a link to it in the description."}, {"start": 226.1, "end": 227.29999999999998, "text": " Make sure to have a look."}, {"start": 227.29999999999998, "end": 230.6, "text": " I think it helps you understand this paper better."}, {"start": 230.6, "end": 236.1, "text": " During my PhD studies, I trained a ton of neural networks which were used in our experiments."}, {"start": 236.1, "end": 244.4, "text": " However, over time, there was just too much data in our repositories, and what I am looking for is not data, but insight."}, {"start": 244.4, "end": 249.4, "text": " And that's exactly how weights and biases helps you by organizing your experiments."}, {"start": 249.4, "end": 257.5, "text": " It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub, and more."}, {"start": 257.5, "end": 264.6, "text": " And get this, weight and biases is free for all individuals, academics, and open source projects."}, {"start": 264.6, "end": 273.6, "text": " Make sure to visit them 
through wnb.com slash papers, or just click the link in the video description, and you can get a free demo today."}, {"start": 273.6, "end": 279.6, "text": " Our thanks to weights and biases for their long-standing support, and for helping us make better videos for you."}, {"start": 279.6, "end": 306.6, "text": " Thanks for watching, and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=l_C3KFeI_l0
AI-Based Sky Replacement Is Here! 🌓
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their report on this paper is available here: https://wandb.ai/wandb/skyAR/reports/The-Sky-Is-In-Our-Grasp---VmlldzozMjY0NDI 📝 The paper "Castle in the Sky: Dynamic Sky Replacement and Harmonization in Videos" is available here: https://jiupinjia.github.io/skyar/ ☀️The mentioned free light transport course is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/rendering-course/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Through the power of neural network-based learning algorithms, today it is possible to perform video-to-video translation. This means that, for instance, in goes a daytime video and out comes a nighttime version of the same footage. This previous method had some remarkable properties; for instance, look, you can even see the reflections of the night lights appearing on the car hood. This is reassuring news indeed, because the spark of some sort of intelligence was already there in these algorithms. And believe it or not, this is old, old news, because we have been able to do this for about three years now. Of course, this was not perfect: the results were missing a lot of details, the frames of the video were quite far apart, artifacts were abundant, and more. But this paper was a huge leap at the time, and today I wonder, how far have we come in three years? What can we do now that we were not able to do three years ago? Let's have a look together. This input video is much more detailed and a lot more continuous, so I wonder what we could do with this. Well, hold on to your papers and... Woohoo! Look at that! One, we have changed the sky and put a spaceship in there. That is already amazing, but look! Two, the spaceship is not stationary, but moves in harmony with the other objects in the video. And three, since the sky has changed, the lighting situation has also changed, so the colors of the remainder of the image also have to change. Let's see the before and after on that. Yes, excellent! And we can do so much more with this, for instance, put a castle in the sky... Or, you know what? Let's think big. Make it an extra planet, please. Wow! Thank you! And, wait a minute. If it is able to recolor the image after changing its surroundings, how well does it do it? Can we change the background to a dynamic one? What about a thunderstorm? Oh, yeah! That's as dynamic as it gets, and the new method handles this case beautifully. And before we look under the hood to see how all this wizardry is done, let's list our expectations. One, we expect that it has to know which pixels to change to load in a different sky model. Two, it should know how the image is changing and rotating over time. And three, some recoloring has to take place. Now let's have a look and see if we find the parts we are expecting. And, yes, it has a sky matting network. This finds the parts of the image where the sky is. There is the motion estimator, which computes the optical flow of the image and extracts the movement of the sky over time, and there is the recoloring module as well. So, there we go. This can do not only sky replacement; detailed weather and lighting synthesis is also possible for these videos. What a time to be alive! Now, if you remember, first we looked at the results, listed our expectations, and then we looked at the architecture of this neural network. This is a technique that I try to teach to my students in my light transport simulation course at the Technical University of Vienna, and I think if you check it out, you will be surprised by how effective it is. Now, note that I was always teaching it to a handful of motivated students, and I believe that the teachings shouldn't only be available for the privileged few who can afford a college education; the teachings should be available for everyone. Free education for everyone, that's what I want.
So, the course is available free of charge for everyone, no strings attached, so make sure to click the link in the video description to get started. We write a full light simulation program from scratch there, and learn about physics, the world around us, and more. And what is perhaps even more important is that I try to teach you a powerful way of understanding difficult mathematical concepts. Make sure to have a look. What you see here is a report on this exact paper we have talked about, which was made by Weights & Biases. I put a link to it in the description. Make sure to have a look; I think it helps you understand this paper better. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs, such as OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights & Biases is free for all individuals, academics, and open source projects. It really is as good as it gets. Make sure to visit them through wandb.com/papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
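The pipeline described in the transcript above has three parts: a sky matting network, a motion estimator, and a recoloring module. The sketch below shows only the final composite plus a crude stand-in for the recoloring, assuming the sky matte and the already-warped new sky plate are produced by the first two modules; the blending weight and the relighting rule are made-up placeholders, not the paper's method.

```python
import numpy as np

def replace_sky(frame, sky_matte, new_sky, relight=0.3):
    """frame, new_sky: HxWx3 floats in [0, 1]; sky_matte: HxW in [0, 1],
    where 1 means 'this pixel is sky'."""
    matte = sky_matte[..., None]
    # Composite the new sky plate into the sky region.
    out = matte * new_sky + (1.0 - matte) * frame
    # Crude recoloring: pull the non-sky region toward the new sky's mean
    # tone so the scene lighting roughly follows the replaced sky.
    sky_tone = new_sky.reshape(-1, 3).mean(axis=0)
    foreground = (1.0 - relight) * out + relight * sky_tone
    out = matte * out + (1.0 - matte) * foreground
    return out.clip(0.0, 1.0)

frame = np.random.rand(4, 4, 3)
sky_matte = np.random.rand(4, 4)       # stand-in for the sky matting network's output
new_sky = np.random.rand(4, 4, 3)      # stand-in for the motion-warped sky plate
print(replace_sky(frame, sky_matte, new_sky).shape)
```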
[{"start": 0.0, "end": 4.42, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Jean-Eiffahir,"}, {"start": 4.42, "end": 7.68, "text": " through the power of neural network-based learning algorithms,"}, {"start": 7.68, "end": 11.92, "text": " today it is possible to perform video-to-video translation."}, {"start": 11.92, "end": 15.96, "text": " This means that, for instance, in-go's a daytime video"}, {"start": 15.96, "end": 19.8, "text": " and outcomes a nighttime version of the same footage."}, {"start": 19.8, "end": 24.68, "text": " This previous method had some remarkable properties, for instance, look,"}, {"start": 24.68, "end": 29.400000000000002, "text": " you can even see the reflections of the night lights appearing on the car hood."}, {"start": 29.4, "end": 34.199999999999996, "text": " This is reassuring news indeed, because the spark of some sort of intelligence"}, {"start": 34.199999999999996, "end": 36.8, "text": " was already there in these algorithms."}, {"start": 36.8, "end": 39.96, "text": " And believe it or not, this is old, old news,"}, {"start": 39.96, "end": 43.6, "text": " because we have been able to do these for about three years now."}, {"start": 43.6, "end": 47.92, "text": " Of course, this was not perfect, the results were missing a lot of details,"}, {"start": 47.92, "end": 50.8, "text": " the frames of the video were quite a bit apart,"}, {"start": 50.8, "end": 53.239999999999995, "text": " artifacts were abundant, and more."}, {"start": 53.239999999999995, "end": 56.16, "text": " But this paper was a huge leap at the time,"}, {"start": 56.16, "end": 60.559999999999995, "text": " and today I wonder how far have we come in three years?"}, {"start": 60.559999999999995, "end": 64.84, "text": " What can we do now that we were not able to do three years ago?"}, {"start": 64.84, "end": 66.39999999999999, "text": " Let's have a look together."}, {"start": 66.39999999999999, "end": 70.44, "text": " This input video is much more detailed, a lot more continuous,"}, {"start": 70.44, "end": 72.92, "text": " so I wonder what we could do with this."}, {"start": 72.92, "end": 76.36, "text": " Well, hold on to your papers and..."}, {"start": 76.36, "end": 78.92, "text": " Woohoo! Look at that!"}, {"start": 78.92, "end": 83.0, "text": " One, we have changed the sky and put a spaceship in there."}, {"start": 83.0, "end": 86.16, "text": " That is already amazing, but look!"}, {"start": 86.16, "end": 89.04, "text": " Two, the spaceship is not stationary,"}, {"start": 89.04, "end": 93.4, "text": " but moves in harmony with the other objects in the video."}, {"start": 93.4, "end": 96.2, "text": " And three, since the sky has changed,"}, {"start": 96.2, "end": 98.76, "text": " the lighting situation has also changed,"}, {"start": 98.76, "end": 102.76, "text": " so the colors of the remainder of the image also have to change."}, {"start": 102.76, "end": 106.12, "text": " Let's see the before and after on that."}, {"start": 106.12, "end": 108.12, "text": " Yes, excellent!"}, {"start": 108.12, "end": 111.0, "text": " And we can do so much more with this, for instance,"}, {"start": 111.0, "end": 113.76, "text": " put a castle in the sky..."}, {"start": 113.76, "end": 115.4, "text": " Or, you know what?"}, {"start": 115.4, "end": 116.92, "text": " Let's think big."}, {"start": 116.92, "end": 119.36, "text": " Make it an extra planet, please."}, {"start": 119.36, "end": 121.68, "text": " Wow! 
Thank you!"}, {"start": 121.68, "end": 123.2, "text": " And, wait a minute."}, {"start": 123.2, "end": 127.52, "text": " If it is able to recolor the image after changing its surroundings,"}, {"start": 127.52, "end": 129.28, "text": " how well does it do it?"}, {"start": 129.28, "end": 132.16, "text": " Can we change the background to a dynamic one?"}, {"start": 132.16, "end": 134.84, "text": " What about a thunderstorm?"}, {"start": 134.84, "end": 135.72, "text": " Oh, yeah!"}, {"start": 135.72, "end": 137.72, "text": " That's as dynamic as it gets,"}, {"start": 137.72, "end": 141.0, "text": " and a new method handles this case beautifully."}, {"start": 141.0, "end": 145.56, "text": " And before we look under the hood to see how all this wizardry is done,"}, {"start": 145.56, "end": 147.72, "text": " let's list our expectations."}, {"start": 147.72, "end": 153.52, "text": " One, we expect that it has to know what picks us to change to load a different sky model."}, {"start": 153.52, "end": 158.44, "text": " Two, it should know how the image is changing and rotating over time."}, {"start": 158.44, "end": 161.92, "text": " And three, some recoloring has to take place."}, {"start": 161.92, "end": 166.44, "text": " Now let's have a look and see if we find the parts we are expecting."}, {"start": 166.44, "end": 169.84, "text": " And, yes, it has a sky-mating network."}, {"start": 169.84, "end": 172.88, "text": " This finds the parts of the image where the sky is."}, {"start": 172.88, "end": 177.16, "text": " There is the motion estimator that computes the optical flow of the image,"}, {"start": 177.16, "end": 180.07999999999998, "text": " distracts the movement of the sky over time,"}, {"start": 180.07999999999998, "end": 183.4, "text": " and there is the recoloring module as well."}, {"start": 183.4, "end": 184.56, "text": " So, there we go."}, {"start": 184.56, "end": 189.48, "text": " This can do not only sky replacement, but detailed weather and lighting synthesis"}, {"start": 189.48, "end": 191.96, "text": " is also possible for these videos."}, {"start": 191.96, "end": 193.8, "text": " What a time to be alive!"}, {"start": 193.8, "end": 198.76000000000002, "text": " Now, if you remember, first we looked at the results, listed our expectations,"}, {"start": 198.76000000000002, "end": 202.24, "text": " and then we looked at the architecture of this neural network."}, {"start": 202.24, "end": 205.04000000000002, "text": " This is a technique that I try to teach to my students"}, {"start": 205.04000000000002, "end": 209.16000000000003, "text": " in my Light Transport simulation course at the Technical University of Vienna,"}, {"start": 209.16000000000003, "end": 213.88000000000002, "text": " and I think if you check it out, you will be surprised by how effective it is."}, {"start": 213.88000000000002, "end": 217.76000000000002, "text": " Now, note that I was always teaching it to a handful of motivated students,"}, {"start": 217.76000000000002, "end": 222.48000000000002, "text": " and I believe that the teachings shouldn't only be available for the privileged few"}, {"start": 222.48, "end": 228.2, "text": " who can afford a college education, but the teachings should be available for everyone."}, {"start": 228.2, "end": 231.39999999999998, "text": " Free education for everyone, that's what I want."}, {"start": 231.39999999999998, "end": 235.72, "text": " So, the course is available free of charge for everyone, no strings attached,"}, {"start": 235.72, "end": 239.12, "text": " so make sure to 
click the link in the video description to get started."}, {"start": 239.12, "end": 242.39999999999998, "text": " We write a full-light simulation program from scratch there,"}, {"start": 242.39999999999998, "end": 246.16, "text": " and learn about physics, the world around us, and more."}, {"start": 246.16, "end": 249.92, "text": " And what is perhaps even more important is that I try to teach you"}, {"start": 249.92, "end": 254.16, "text": " a powerful way of understanding difficult mathematical concepts."}, {"start": 254.16, "end": 255.51999999999998, "text": " Make sure to have a look."}, {"start": 255.51999999999998, "end": 259.56, "text": " What you see here is a report of this exact paper we have talked about,"}, {"start": 259.56, "end": 261.84, "text": " which was made by Wades and Biasis."}, {"start": 261.84, "end": 263.8, "text": " I put a link to it in the description."}, {"start": 263.8, "end": 265.03999999999996, "text": " Make sure to have a look."}, {"start": 265.03999999999996, "end": 268.47999999999996, "text": " I think it helps you understand this paper better."}, {"start": 268.47999999999996, "end": 273.12, "text": " Wades and Biasis provides tools to track your experiments in your deep learning projects."}, {"start": 273.12, "end": 276.71999999999997, "text": " Their system is designed to save you a ton of time and money,"}, {"start": 276.72, "end": 280.0, "text": " and it is actively used in projects at prestigious labs,"}, {"start": 280.0, "end": 284.08000000000004, "text": " such as OpenAI, Toyota Research, GitHub, and more."}, {"start": 284.08000000000004, "end": 288.44000000000005, "text": " And the best part is that Wades and Biasis is free for all individuals,"}, {"start": 288.44000000000005, "end": 290.76000000000005, "text": " academics, and open source projects."}, {"start": 290.76000000000005, "end": 293.32000000000005, "text": " It really is as good as it gets."}, {"start": 293.32000000000005, "end": 297.36, "text": " Make sure to visit them through wnb.com slash papers,"}, {"start": 297.36, "end": 300.04, "text": " or just click the link in the video description,"}, {"start": 300.04, "end": 302.20000000000005, "text": " and you can get the free demo today."}, {"start": 302.20000000000005, "end": 305.32000000000005, "text": " Our thanks to Wades and Biasis for their long-standing support"}, {"start": 305.32, "end": 307.96, "text": " and for helping us make better videos for you."}, {"start": 307.96, "end": 310.15999999999997, "text": " Thanks for watching and for your generous support,"}, {"start": 310.16, "end": 335.88, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=vfJz7WlRNk4
Near-Perfect Virtual Hands For Virtual Reality! 👐
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "MEgATrack: Monochrome Egocentric Articulated Hand-Tracking for Virtual Reality" is available here: https://research.fb.com/publications/megatrack-monochrome-egocentric-articulated-hand-tracking-for-virtual-reality/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-4949333/ Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #vr #metaverse
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. The promise of virtual reality, VR, is indeed truly incredible. If one day it comes to fruition, doctors could be trained to perform surgery in a virtual environment. We could train better pilots with better flight simulators, expose astronauts to virtual zero-gravity simulations, you name it. An important part of doing many of these is simulating walking in a virtual environment. You see, we can be located in a small room, put on a VR headset, and enter a wonderful, expansive virtual world. However, as we start walking, we immediately experience a big problem. What is that problem? Well, we bump into things. As a remedy, we could make our virtual world smaller, but that would defeat the purpose. This earlier technique addresses this walking problem spectacularly by redirection. So, what is this redirection thing exactly? Redirection is a simple concept that changes our movement in the virtual world so it deviates from our real path in the room, in a way that both lets us explore the virtual world and keeps us from bumping into walls and objects in reality in the meantime. Here you can see how the blue and orange lines deviate, which means that the algorithm is at work. With this, we can wander about in a huge and majestic virtual landscape, or a cramped bar, even when confined to a small physical room. Loving the idea. But there is more to interacting with virtual worlds than walking; for instance, look at this tech demo that requires more precise hand movements. How do we perform this? Well, the key is here. Controllers. Clearly, they work, but can we get rid of them? Can we just opt for a more natural solution and use our hands instead? Well, hold on to your papers, because this new work uses a learning-based algorithm to teach a head-mounted camera to tell the orientation of our hands at all times. Of course, the quality of the execution matters a great deal, so we have to ensure at least three things. One is that the hand tracking happens with minimal latency, which means that we see our actions immediately, with minimal delay. Two, we need low jitter, which means that the key points of the reconstructed hand should not change too much from frame to frame. This happens a great deal with previous methods, and what about the new one? Oh yes, much smoother. Checkmark. Note that the new method also remembers the history of the hand movement, and therefore can deal with difficult occlusion situations. For instance, look at the pinky here. A previous technique would not know what's going on with it, but this new one knows exactly what is going on, because it has information on what the hand was doing a moment ago. And three, this needs to work in all kinds of lighting conditions. Let's see if it can reconstruct a range of mythical creatures in poor lighting conditions. Yes, these ducks are reconstructed just as well as the mighty poke monster, and these scissors too. Bravo. So, what can we do with this? A great deal. For instance, we can type on a virtual keyboard, or implement all kinds of virtual user interfaces that we can interact with. We can also organize imaginary boxes, and of course, we can't leave out the Two Minute Papers favorite: going into a physics simulation and playing with it. But of course, not everything is perfect here. Look, hand-hand interactions don't work so well, so folks who prefer virtual reality applications that involve washing their hands should look elsewhere. But of course, one step at a time.
This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com/papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
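One requirement highlighted in the transcript above is low jitter, helped by using the history of the hand movement. The sketch below is not the paper's temporal model; it is a minimal exponential-moving-average smoother over per-frame hand keypoints, with an assumed keypoint count and smoothing factor, just to show what temporal filtering of noisy per-frame predictions looks like.

```python
import numpy as np

class KeypointSmoother:
    """Exponential moving average over per-frame hand keypoint predictions."""
    def __init__(self, alpha=0.6):
        self.alpha = alpha   # higher = trust the newest frame more (illustrative value)
        self.state = None    # last smoothed keypoints

    def update(self, keypoints):
        """keypoints: (21, 3) array of 3D hand keypoints for the new frame."""
        if self.state is None:
            self.state = keypoints.copy()
        else:
            self.state = self.alpha * keypoints + (1.0 - self.alpha) * self.state
        return self.state

smoother = KeypointSmoother()
for _ in range(5):
    noisy = np.random.rand(21, 3)        # stand-in for per-frame network predictions
    smooth = smoother.update(noisy)
print(smooth.shape)
```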
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Karajjona Ifehr."}, {"start": 4.32, "end": 9.36, "text": " The promise of virtual reality VR is indeed truly incredible."}, {"start": 9.36, "end": 15.68, "text": " If one day it comes to fruition, doctors could be trained to perform surgery in a virtual environment."}, {"start": 15.68, "end": 24.48, "text": " We could train better pilots with better flight simulators, expose astronauts to virtual zero gravity simulations, you name it."}, {"start": 24.48, "end": 29.84, "text": " An important part of doing many of these is simulating walking in a virtual environment."}, {"start": 29.84, "end": 38.08, "text": " You see, we can be located in a small room, put on a VR headset and enter a wonderful, expensive virtual world."}, {"start": 38.08, "end": 43.28, "text": " However, as we start walking, we immediately experience a big problem."}, {"start": 43.28, "end": 45.28, "text": " What is that problem?"}, {"start": 46.32, "end": 48.480000000000004, "text": " Well, we bump into things."}, {"start": 48.480000000000004, "end": 54.400000000000006, "text": " As a remedy, we could make our virtual world smaller, but that would defeat the purpose."}, {"start": 54.4, "end": 60.32, "text": " This earlier technique addresses this walking problem spectacularly by redirection."}, {"start": 60.32, "end": 63.2, "text": " So, what is this redirection thing exactly?"}, {"start": 63.2, "end": 67.75999999999999, "text": " Redirection is a simple concept that changes our movement in the virtual world,"}, {"start": 67.75999999999999, "end": 74.32, "text": " so it deviates from a real path in the room, in a way that both lets us explore the virtual world,"}, {"start": 74.32, "end": 78.96000000000001, "text": " and not bump into walls and objects in reality in the meantime."}, {"start": 78.96, "end": 84.8, "text": " Here you can see how the blue and orange lines deviate, which means that the algorithm is at work."}, {"start": 84.8, "end": 89.44, "text": " With this, we can wonder about in a huge and majestic virtual landscape,"}, {"start": 89.44, "end": 94.63999999999999, "text": " or a cramped bar, even when being confined to a small physical room."}, {"start": 94.63999999999999, "end": 96.16, "text": " Loving the idea."}, {"start": 96.16, "end": 100.39999999999999, "text": " But there is more to interacting with virtual worlds than walking,"}, {"start": 100.39999999999999, "end": 105.6, "text": " for instance, look at this tech demo that requires more precise hand movements."}, {"start": 105.6, "end": 106.8, "text": " How do we perform this?"}, {"start": 106.8, "end": 109.11999999999999, "text": " Well, the key is here."}, {"start": 109.11999999999999, "end": 110.16, "text": " Controllers."}, {"start": 110.16, "end": 113.52, "text": " Clearly, they work, but can we get rid of them?"}, {"start": 113.52, "end": 118.56, "text": " Can we just opt for a more natural solution and use our hands instead?"}, {"start": 118.56, "end": 123.84, "text": " Well, hold on to your papers, because this new work uses a learning-based algorithm"}, {"start": 123.84, "end": 129.2, "text": " to teach a head-mounted camera to tell the orientation of our hands at all times."}, {"start": 129.2, "end": 133.2, "text": " Of course, the quality of the execution matters a great deal,"}, {"start": 133.2, "end": 136.0, "text": " so we have to ensure at least three things."}, {"start": 136.0, "end": 140.16, "text": " One is that the hand tracking happens 
with minimal latency,"}, {"start": 140.16, "end": 144.08, "text": " which means that we see our actions immediately with minimal delay."}, {"start": 144.08, "end": 149.52, "text": " Two, we need low jitter, which means that the key points of the reconstructed hand"}, {"start": 149.52, "end": 152.24, "text": " should not change too much from frame to frame."}, {"start": 152.24, "end": 154.72, "text": " This happens a great deal with previous methods,"}, {"start": 155.6, "end": 157.28, "text": " and what about the new one?"}, {"start": 158.8, "end": 160.56, "text": " Oh, yes, much smoother."}, {"start": 161.2, "end": 162.24, "text": " Checkmark."}, {"start": 162.24, "end": 166.08, "text": " Note that the new method also remembers the history of the hand movement,"}, {"start": 166.08, "end": 169.52, "text": " and therefore can deal with difficult occlusion situations."}, {"start": 169.52, "end": 171.76000000000002, "text": " For instance, look at the pinky here."}, {"start": 171.76000000000002, "end": 174.72, "text": " A previous technique would not know what's going on with it,"}, {"start": 174.72, "end": 177.84, "text": " but this new one knows exactly what is going on,"}, {"start": 177.84, "end": 181.60000000000002, "text": " because it has information on what the hand was doing a moment ago."}, {"start": 182.4, "end": 186.4, "text": " And three, this needs to work in all kinds of lighting conditions."}, {"start": 186.4, "end": 192.56, "text": " Let's see if it can reconstruct a range of mythical creatures in poor lighting conditions."}, {"start": 192.56, "end": 197.28, "text": " Yes, these ducks are reconstructed just as well as the mighty poke monster,"}, {"start": 199.76, "end": 201.6, "text": " and these scissors too."}, {"start": 201.6, "end": 202.08, "text": " Bravo."}, {"start": 202.72, "end": 204.16, "text": " So, what can we do with this?"}, {"start": 204.8, "end": 205.92000000000002, "text": " A great deal."}, {"start": 205.92000000000002, "end": 208.8, "text": " For instance, we can type on a virtual keyboard,"}, {"start": 208.8, "end": 213.28, "text": " or implement all kinds of virtual user interfaces that we can interact with."}, {"start": 213.28, "end": 216.16, "text": " We can also organize imaginary boxes,"}, {"start": 216.96, "end": 221.2, "text": " and of course, we can't leave out the two-minute papers favorite,"}, {"start": 221.2, "end": 224.64, "text": " going into a physics simulation, and playing with it."}, {"start": 224.64, "end": 228.56, "text": " But of course, not everything is perfect here."}, {"start": 228.56, "end": 232.08, "text": " Look, hand-hand interactions don't work so well,"}, {"start": 232.08, "end": 238.0, "text": " so folks who prefer virtual reality applications that include washing our hands should look elsewhere."}, {"start": 238.0, "end": 240.56, "text": " But of course, one step at a time."}, {"start": 240.56, "end": 244.72, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 244.72, "end": 250.64000000000001, "text": " If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 250.64000000000001, "end": 257.52, "text": " They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances,"}, {"start": 257.52, "end": 265.12, "text": " and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure."}, {"start": 265.12, "end": 270.24, "text": " Plus, they are the only Cloud service with 48GB RTX 8000."}, {"start": 270.24, "end": 274.96000000000004, 
"text": " Join researchers at organizations like Apple, MIT, and Caltech"}, {"start": 274.96000000000004, "end": 278.72, "text": " in using Lambda Cloud instances, workstations, or servers."}, {"start": 278.72, "end": 285.44, "text": " Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today."}, {"start": 285.44, "end": 288.24, "text": " Our thanks to Lambda for their long-standing support,"}, {"start": 288.24, "end": 291.12, "text": " and for helping us make better videos for you."}, {"start": 291.12, "end": 303.52, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=atzPvW95ahQ
Is Videoconferencing With Smart Glasses Possible? 👓
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/wandb/egocentric-video-conferencing/reports/Overview-Egocentric-Videoconferencing--VmlldzozMTY1NTA 📝 The paper "Egocentric Videoconferencing" is available here: http://gvv.mpi-inf.mpg.de/projects/EgoChat/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-820390/ Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to have a look at the state of egocentric videoconferencing. Now, this doesn't mean that only we get to speak during a meeting; it means that we are wearing a camera, which looks like this, and the goal is to use a learning algorithm to synthesize this frontal view of us. Now note that what you see here is the recorded reference footage; this is reality, and this would need to be somehow synthesized by the algorithm. If we could pull that off, we could add a low-cost egocentric camera to smart glasses, and it could pretend to see us from the front, which would be amazing for hands-free videoconferencing. That would be insanity. But wait a second, how is this even possible? For us to even have a fighting chance, there are four major problems to overcome here. One, this camera lens is very close to us, which means that it doesn't see the entirety of the face. That sounds extremely challenging. And if that wasn't bad enough, two, we also have tons of distortion in the images, or in other words, things don't look like they look in reality; we would have to account for that too. Three, it would also have to take into account our current expression, gaze, blinking, and more. Oh boy, and finally, four, the output needs to be photorealistic, or, even better, video-realistic. Remember, we don't just need one image, but a continuously moving video output. So the problem is, once again: input, egocentric view; output, synthesized frontal view. This is the reference footage, reality, if you will, and now let's see how this learning-based algorithm is able to reconstruct it. Hello, is this a mistake? They look identical. As if they were just copied here. No, you will see in a moment that it's not a mistake. This means that the AI is giving us a nearly perfect reconstruction of the remainder of the human face. That is absolutely amazing. Now, it is still not perfect. There are some differences. So how do we get a good feel for where the inaccuracies are? The answer is a difference image. Look, regions with warmer colors indicate where the reconstruction is inaccurate compared to the real reference footage. For instance, with an earlier method by the name of Pix2Pix, the hair and the beard are doing fine, while we have quite a bit of reconstruction error on the remainder of the face. So, did the new method do better than this? Let's have a look together. Oh yeah, it does much better across the entirety of the face. It still has some trouble with the cable and the glasses, but otherwise, this is a clean, clean image. Bravo. Now, we talked about the challenge of reconstructing expressions correctly. To be able to read the other person is of utmost importance during a video conference. So, how good is it at gestures? Well, let's put it through an intense stress test. This is as intense as it gets without having access to Jim Carrey as a test subject, I suppose. And I bet there was a lot of fun to be had in the lab on this day. And the results are outstanding. Especially if we compare it again to the Pix2Pix technique from 2017. I love this idea. Because if we can overcome the huge shortcomings of the egocentric camera, in return, we get an excellent view of subtle facial expressions and can deal with the tiniest eye movements, twitches, tongue movements, and more. And it really shows in the results. Now, please note that this technique needs to be trained on each of these test subjects.
About 4 minutes of video footage is fine, and this calibration process only needs to be done once. So, once again, the technique knows these people and has seen them before. But in return, it can do even more. If all of this is synthesized, we have a lot of control over the data, and the AI understands what much of this data means. So, with all that extra knowledge, what else can we do with this footage? For instance, we can not just reconstruct, but also create arbitrary head movement. We can guess what the real head movement is because we have a view of the background, and we can simply remove it; or, from the movement of the background, we can infer what kind of head movement is taking place. And what's even better, we can not only get control over the head movement and change it, but even remove the movement from the footage altogether. And we can also remove the glasses and pretend to have dressed properly for an occasion. How cool is that? Now, make no mistake, this paper contains a ton of comparisons against a variety of other works as well. Here are some, but make sure to check them all out in the video description. Now, of course, even this new method isn't perfect; it does not work all that well in low-light situations, but of course, let's leave something to improve for the next paper down the line. And hopefully, in the near future, we will be able to seamlessly get in contact with our loved ones through smart glasses and egocentric cameras. What a time to be alive! What you see here is a report on this exact paper we have talked about, which was made by Weights & Biases. I put a link to it in the description; make sure to have a look, I think it helps you understand this paper better. If you work with learning algorithms on a regular basis, make sure to check out Weights & Biases. Their system is designed to help you organize your experiments, and it is so good it could shave off weeks or even months of work from your projects, and is completely free for all individuals, academics, and open source projects. This really is as good as it gets, and it is hardly a surprise that they are now used by over 200 companies and research institutions. Make sure to visit them through wandb.com slash papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
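The difference image described in the transcript above is straightforward to reproduce. Below is a minimal sketch, assuming the reconstruction and the reference footage are available as equally sized RGB frames; the mean-absolute-error metric and the colormap are illustrative choices, not necessarily the paper's exact visualization.

```python
import numpy as np
import matplotlib.pyplot as plt

def difference_image(reference, reconstruction):
    """Per-pixel reconstruction error, averaged over the color channels."""
    ref = reference.astype(np.float32) / 255.0
    rec = reconstruction.astype(np.float32) / 255.0
    return np.abs(ref - rec).mean(axis=-1)  # shape (H, W), values in [0, 1]

# Toy frames stand in for the real footage; warmer colors mark larger errors.
rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(256, 256, 3), dtype=np.uint8)
reconstruction = np.clip(reference + rng.integers(-20, 20, reference.shape), 0, 255).astype(np.uint8)

plt.imshow(difference_image(reference, reconstruction), cmap="inferno")
plt.colorbar(label="mean absolute error")
plt.title("Difference image: warmer = less accurate")
plt.show()
```

Averaging the error over the color channels keeps the heatmap single-channel, which is what makes the warm/cool color mapping in the comparisons possible.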
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karajjola Ifehir."}, {"start": 4.4, "end": 9.4, "text": " Today we are going to have a look at the state of egocentric video conferencing."}, {"start": 9.4, "end": 12.9, "text": " Now this doesn't mean that only we get to speak during a meeting,"}, {"start": 12.9, "end": 16.6, "text": " it means that we are wearing a camera which looks like this,"}, {"start": 16.6, "end": 22.0, "text": " and the goal is to use a learning algorithm to synthesize this frontal view of us."}, {"start": 22.0, "end": 25.7, "text": " Now note that what you see here is the recorded reference footage,"}, {"start": 25.7, "end": 31.2, "text": " this is reality and this would need to be somehow synthesized by the algorithm."}, {"start": 31.2, "end": 36.4, "text": " If we could pull that off, we could add a low-cost egocentric camera to smart glasses"}, {"start": 36.4, "end": 42.3, "text": " and it could pretend to see us from the front which would be amazing for hands-free video conferencing."}, {"start": 42.3, "end": 44.3, "text": " That would be insanity."}, {"start": 44.3, "end": 48.3, "text": " But wait a second, how is this even possible?"}, {"start": 48.3, "end": 53.4, "text": " For us to even have a fighting chance, there are four major problems to overcome here."}, {"start": 53.4, "end": 59.9, "text": " One, this camera lens is very close to us which means that it doesn't see the entirety of the face."}, {"start": 59.9, "end": 62.2, "text": " That sounds extremely challenging."}, {"start": 62.2, "end": 67.4, "text": " And if that wasn't bad enough, two, we also have tons of distortion in the images"}, {"start": 67.4, "end": 71.5, "text": " or in other words, things don't look like they look in reality,"}, {"start": 71.5, "end": 74.1, "text": " we would have to account for that two."}, {"start": 74.1, "end": 78.2, "text": " Three, it would also have to take into account our current expression,"}, {"start": 78.2, "end": 80.9, "text": " gaze, blinking, and more."}, {"start": 80.9, "end": 88.7, "text": " Oh boy, and finally, four, the output needs to be photorealistic or even better video realistic."}, {"start": 88.7, "end": 94.2, "text": " Remember, we don't just need one image, but a continuously moving video output."}, {"start": 94.2, "end": 101.0, "text": " So the problem is, once again, input, egocentric view, output, synthesized, frontal view."}, {"start": 101.0, "end": 105.0, "text": " This is the reference footage, reality, if you will, and now,"}, {"start": 105.0, "end": 111.6, "text": " let's see how this learning based algorithm is able to reconstruct it."}, {"start": 111.6, "end": 114.1, "text": " Hello, is this a mistake?"}, {"start": 114.1, "end": 115.7, "text": " They look identical."}, {"start": 115.7, "end": 117.7, "text": " As if they were just copied here."}, {"start": 117.7, "end": 120.6, "text": " No, you will see in a moment that it's not a mistake."}, {"start": 120.6, "end": 127.1, "text": " This means that the AI is giving us a nearly perfect reconstruction of the remainder of the human face."}, {"start": 127.1, "end": 129.3, "text": " That is absolutely amazing."}, {"start": 129.3, "end": 131.1, "text": " Now, it is still not perfect."}, {"start": 131.1, "end": 132.8, "text": " There are some differences."}, {"start": 132.8, "end": 136.60000000000002, "text": " So how do we get a good feel of where the inaccuracies are?"}, {"start": 136.60000000000002, "end": 138.9, "text": " The answer is a 
difference image."}, {"start": 138.9, "end": 146.10000000000002, "text": " Look, regions with warmer colors indicate where the reconstruction is inaccurate compared to the real reference footage."}, {"start": 146.10000000000002, "end": 149.60000000000002, "text": " For instance, with an earlier method by the name, picks to picks,"}, {"start": 149.60000000000002, "end": 156.4, "text": " the hair and the beard are doing fine, while we have quite a bit of reconstruction error on the remainder of the face."}, {"start": 156.4, "end": 159.60000000000002, "text": " So, did the new method do better than this?"}, {"start": 159.60000000000002, "end": 161.8, "text": " Let's have a look together."}, {"start": 161.8, "end": 166.60000000000002, "text": " Oh yeah, it does much better across the entirety of the face."}, {"start": 166.60000000000002, "end": 172.4, "text": " It still has some trouble with the cable and the glasses, but otherwise, this is a clean, clean image."}, {"start": 172.4, "end": 173.4, "text": " Bravo."}, {"start": 173.4, "end": 177.9, "text": " Now, we talked about the challenge of reconstructing expressions correctly."}, {"start": 177.9, "end": 182.70000000000002, "text": " To be able to read the other person is of utmost importance during a video conference."}, {"start": 182.70000000000002, "end": 185.20000000000002, "text": " So, how good is it at gestures?"}, {"start": 185.20000000000002, "end": 188.70000000000002, "text": " Well, let's put it through an intense stress test."}, {"start": 188.7, "end": 194.6, "text": " This is as intense as it gets without having access to Jim Carey as a test subject, I suppose."}, {"start": 194.6, "end": 198.6, "text": " And I bet there was a lot of fun to be had in the lab on this day."}, {"start": 198.6, "end": 200.7, "text": " And the results are outstanding."}, {"start": 200.7, "end": 205.5, "text": " Especially if we compare it again to the picks to picks technique from 2017."}, {"start": 205.5, "end": 207.0, "text": " I love this idea."}, {"start": 207.0, "end": 211.6, "text": " Because if we can overcome the huge shortcomings of the egocentric camera,"}, {"start": 211.6, "end": 215.79999999999998, "text": " in return, we get an excellent view of subtle facial expressions"}, {"start": 215.8, "end": 221.70000000000002, "text": " and can deal with the tiniest eye movements, twitches, tongue movements, and more."}, {"start": 221.70000000000002, "end": 224.0, "text": " And it really shows in the results."}, {"start": 224.0, "end": 228.70000000000002, "text": " Now, please note that this technique needs to be trained on each of these test subjects."}, {"start": 228.70000000000002, "end": 234.9, "text": " About 4 minutes of video footage is fine and this calibration process only needs to be done once."}, {"start": 234.9, "end": 239.4, "text": " So, once again, the technique knows these people and had seen them before."}, {"start": 239.4, "end": 242.4, "text": " But in return, it can do even more."}, {"start": 242.4, "end": 246.70000000000002, "text": " If all of this is synthesized, we have a lot of control over the data"}, {"start": 246.70000000000002, "end": 250.6, "text": " and the AI understands what much of this data means."}, {"start": 250.6, "end": 255.0, "text": " So, with all that extra knowledge, what else can we do with this footage?"}, {"start": 255.0, "end": 260.2, "text": " For instance, we can not just reconstruct, but create arbitrary head movement."}, {"start": 260.2, "end": 264.9, "text": " We can guess what the real head 
movement is because we have a view of the background."}, {"start": 264.9, "end": 266.5, "text": " We can simply remove it."}, {"start": 266.5, "end": 272.2, "text": " Or from the movement of the background, we can infer what kind of head movement is taking place."}, {"start": 272.2, "end": 277.4, "text": " And what's even better, we can not only get control over the head movement and change it,"}, {"start": 277.4, "end": 283.4, "text": " but even remove the movement from the footage altogether."}, {"start": 283.4, "end": 288.8, "text": " And we can also remove the glasses and pretend to have dressed properly for an occasion."}, {"start": 288.8, "end": 290.59999999999997, "text": " How cool is that?"}, {"start": 290.59999999999997, "end": 296.8, "text": " Now, make no mistake, this paper contains a ton of comparisons against a variety of other works as well."}, {"start": 296.8, "end": 300.9, "text": " Here are some, but make sure to check them all out in the video description."}, {"start": 300.9, "end": 304.0, "text": " Now, of course, even this new method isn't perfect,"}, {"start": 304.0, "end": 307.2, "text": " it does not work all that well in low light situations,"}, {"start": 307.2, "end": 311.59999999999997, "text": " but of course, let's leave something to improve for the next paper down the line."}, {"start": 311.59999999999997, "end": 315.0, "text": " And hopefully, in the near future, we will be able to seamlessly"}, {"start": 315.0, "end": 319.7, "text": " getting contact with our loved ones through smart glasses and egocentric cameras."}, {"start": 319.7, "end": 321.79999999999995, "text": " What a time to be alive!"}, {"start": 321.79999999999995, "end": 325.9, "text": " What you see here is a report of this exact paper we have talked about"}, {"start": 325.9, "end": 328.2, "text": " which was made by weights and biases."}, {"start": 328.2, "end": 331.4, "text": " I put a link to it in the description, make sure to have a look,"}, {"start": 331.4, "end": 334.59999999999997, "text": " I think it helps you understand this paper better."}, {"start": 334.59999999999997, "end": 337.8, "text": " If you work with learning algorithms on a regular basis,"}, {"start": 337.8, "end": 340.4, "text": " make sure to check out weights and biases."}, {"start": 340.4, "end": 343.7, "text": " Their system is designed to help you organize your experiments"}, {"start": 343.7, "end": 349.09999999999997, "text": " and it is so good it could shave off weeks or even months of work from your projects"}, {"start": 349.09999999999997, "end": 354.59999999999997, "text": " and is completely free for all individuals, academics and open source projects."}, {"start": 354.59999999999997, "end": 357.0, "text": " This really is as good as it gets"}, {"start": 357.0, "end": 361.3, "text": " and it is hardly a surprise that they are now used by over 200 companies"}, {"start": 361.3, "end": 363.1, "text": " and research institutions."}, {"start": 363.1, "end": 367.3, "text": " Make sure to visit them through wnb.com slash papers"}, {"start": 367.3, "end": 369.7, "text": " or just click the link in the video description"}, {"start": 369.7, "end": 371.9, "text": " and you can get a free demo today."}, {"start": 371.9, "end": 375.2, "text": " Our thanks to weights and biases for their longstanding support"}, {"start": 375.2, "end": 377.9, "text": " and for helping us make better videos for you."}, {"start": 377.9, "end": 380.1, "text": " Thanks for watching and for your generous support"}, {"start": 380.1, "end": 
390.1, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=tWu0AWdaTTs
This AI Makes Puzzle Solving Look Easy! 🧩
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://app.wandb.ai/lavanyashukla/visualize-sklearn/reports/Visualize-Sklearn-Model-Performance--Vmlldzo0ODIzNg 📝 The paper "C-Space Tunnel Discovery for Puzzle Path Planning" is available here: https://xinyazhang.gitlab.io/puzzletunneldiscovery/ https://github.com/xinyazhang/PuzzleTunnelDiscovery 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to immerse ourselves into the wonderful art of rigid body disentanglement, or, in simpler words, we are going to solve these puzzles. To be more exact, we'll sit and enjoy our time while a learning-based algorithm is going to miraculously solve some really challenging puzzles. Yes, this is going to be a spicy paper. Now, these puzzles are roughly ordered by difficulty, where the easier ones are on the left, and as we traverse to the right, things get more and more difficult. And get this, this new technique is able to algorithmically solve all of them. If you are like me, and you don't believe a word of what's been said, let's see if it lives up to its promise by solving three of them in increasing difficulty. Let's start with an easy one. Example number one. Here, the algorithm recognizes that we need to pull the circular part of the red piece through the blue one, and apply the appropriate rotations to make sure that we don't get stuck during this process. While we finish this sequence, please note that this video contains spoilers for some well-known puzzles. If you wish to experience them yourself, pause this video, and, I guess, buy and try them. This one was good enough to warm ourselves up, so let's hop on to the next one. Example number two. The duet puzzle. Well, that was quite a bit of a jump in difficulty, because we seem to be stuck right at the start. Hmm, this seems flat out impossible. Until the algorithm recognizes that there are these small notches in the puzzle, and if we rotate the red piece correctly, we may go from one cell to the next one. Great, so now it's not impossible anymore, it is more like a maze that we have to climb through. But the challenges are still not over. Where is the end point? How do we finish this puzzle? There has to be a notch on the side. Yes, there is this one, or this one, so we ultimately have to end up in one of these places. And there we go. Bravo. Experiment number three. My favorite, the enigma. Hmm, this is exactly the opposite of the previous one. It looks so easy. Just get the tiny opening on the red piece onto this part of the blue one, and we are done. Uh-oh. The curved part is in the way. Something that seems so easy now suddenly seems absolutely impossible. But fortunately, this learning algorithm does not get discouraged, and does not know what impossible is, and it finds this tricky series of rotations to go around the entirety of the blue piece and then finish the puzzle. Glorious. What a roller coaster, a hallmark of an expertly designed puzzle. And to experience more of these puzzles, make sure to have a look at the paper and its website with a super fun interactive app. If you do, you will also learn really cool new things, for instance, if you get back home in the middle of the night after a long day and two of your keys are stuck together, you will know exactly how to handle it. And all this can be done through the power of machine learning and computer graphics research. What a time to be alive. So, how does this wizardry work exactly? The key techniques here are tunnel discovery and path planning. First, a neural network looks at the puzzles and identifies where the gaps and notches are, and specifies the starting position and the goal position that we need to achieve to finish the puzzle. Then, a set of collision-free key configurations is identified, after which the blooming step can commence. So, what does that do?
Well, the goal is to be able to go through these narrow tunnels that represent tricky steps in the puzzles that typically require some unintuitive rotations. These are typically the most challenging parts of the puzzles, and the blooming step starts from these narrow tunnels and helps us reach the bigger bubbles of the puzzle. But as you see, not all roads connect, or at least not easily. The forest-connect step tries to connect these roads through collision-free paths, and now, finally, all we have to do is find the shortest path from the start to the end point to solve the puzzle. And also, according to my internal numbering system, this is Two Minute Papers, episode number 478. And to every single one of you Fellow Scholars who are watching this, thank you so much to all of you for being with us for so long on this incredible journey. Man, I love my job, and I jump out of bed full of energy and joy, knowing that I get to read research papers and flip out together with many of you Fellow Scholars on a regular basis. Thank you. This episode has been supported by Weights & Biases. In this post, they show you how to visualize your scikit-learn models with just a few lines of code. Look at all those beautiful visualizations. So good! You can even try an example in an interactive notebook through the link in the video description. During my PhD studies, I trained a ton of neural networks, which were used in our experiments. However, over time, there was just too much data in our repositories, and what I am looking for is not data, but insight. And that's exactly how Weights & Biases helps you, by organizing your experiments. It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub, and more. And get this, Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their longstanding support, and for helping us make better videos for you. Thanks for watching, and for your generous support, and I'll see you next time.
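The last step described above, finding the shortest path through the roadmap of collision-free configurations, is standard graph search. Here is a minimal sketch using Dijkstra's algorithm on a toy roadmap; the node names and edge costs are placeholder assumptions, and none of the paper's tunnel-discovery, blooming, or forest-connect machinery is reproduced here.

```python
import heapq

def shortest_path(roadmap, start, goal):
    """Dijkstra over a roadmap given as {node: [(neighbor, cost), ...]}."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, edge_cost in roadmap.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + edge_cost, neighbor, path + [neighbor]))
    return None  # start and goal are not connected

# Toy roadmap: nodes are key configurations, edges are collision-free motions.
roadmap = {
    "start": [("tunnel_entry", 1.0)],
    "tunnel_entry": [("tunnel_exit", 3.0), ("dead_end", 0.5)],
    "tunnel_exit": [("goal", 1.0)],
    "dead_end": [],
}
print(shortest_path(roadmap, "start", "goal"))  # (5.0, ['start', 'tunnel_entry', 'tunnel_exit', 'goal'])
```

In the actual method the hard part is building this roadmap in configuration space; once the roads are connected, the search itself is this simple.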
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.4, "end": 10.24, "text": " Today, we are going to immerse ourselves into the wonderful art of rigid body, disentanglement,"}, {"start": 10.24, "end": 13.76, "text": " or, in simpler words, we are going to solve these puzzles."}, {"start": 13.76, "end": 18.72, "text": " To be more exact, we'll sit and enjoy our time while a learning-based algorithm"}, {"start": 18.72, "end": 22.72, "text": " is going to miraculously solve some really challenging puzzles."}, {"start": 22.72, "end": 25.6, "text": " Yes, this is going to be a spicy paper."}, {"start": 25.6, "end": 30.720000000000002, "text": " Now, these puzzles are roughly ordered by difficulty, where the easier ones are on the left,"}, {"start": 30.720000000000002, "end": 34.480000000000004, "text": " and as we traverse to the right, things get more and more difficult."}, {"start": 34.480000000000004, "end": 39.52, "text": " And get this, this new technique is able to algorithmically solve all of them."}, {"start": 39.52, "end": 42.96, "text": " If you are like me, and you don't believe a word of what's been said,"}, {"start": 42.96, "end": 48.08, "text": " let's see if it lives up to its promise by solving three of them in increasing difficulty."}, {"start": 49.040000000000006, "end": 50.72, "text": " Let's start with an easy one."}, {"start": 50.72, "end": 52.480000000000004, "text": " Example number one."}, {"start": 52.48, "end": 57.519999999999996, "text": " Here, the algorithm recognizes that we need to pull the circular part of the red piece through"}, {"start": 57.519999999999996, "end": 63.839999999999996, "text": " the blue one, and apply the appropriate rotations to make sure that we don't get stuck during this process."}, {"start": 63.839999999999996, "end": 69.36, "text": " While we finish this sequence, please note that this video contains spoilers for some well-known puzzles."}, {"start": 69.36, "end": 75.44, "text": " If you wish to experience them yourself, pause this video, and, I guess, buy and try them."}, {"start": 75.44, "end": 80.0, "text": " This one was good enough to warm ourselves up, so let's hop on to the next one."}, {"start": 80.0, "end": 81.75999999999999, "text": " Example number two."}, {"start": 81.76, "end": 83.12, "text": " The duet puzzle."}, {"start": 83.12, "end": 88.24000000000001, "text": " Well, that was quite a bit of a jump in difficulty, because we seem stuck right at the start."}, {"start": 88.88000000000001, "end": 91.92, "text": " Hmm, this seems flat out impossible."}, {"start": 92.48, "end": 97.2, "text": " Until the algorithm recognizes that there are these small notches in the puzzle,"}, {"start": 97.2, "end": 101.84, "text": " and if we rotate the red piece correctly, we may go from one cell to the next one."}, {"start": 103.68, "end": 109.2, "text": " Great, so now it's not impossible anymore, it is more like a maze that we have to climb through."}, {"start": 109.2, "end": 113.36, "text": " But the challenges are still not over. Where is the end point?"}, {"start": 113.36, "end": 115.12, "text": " How do we finish this puzzle?"}, {"start": 115.12, "end": 117.36, "text": " There has to be a notch on the side."}, {"start": 117.36, "end": 124.08, "text": " Yes, there is this one, or this one, so we ultimately have to end up in one of these places."}, {"start": 130.08, "end": 132.56, "text": " And there we go. 
Bravo."}, {"start": 132.56, "end": 135.04, "text": " Experiment number three."}, {"start": 135.04, "end": 137.04, "text": " My favorite, the enigma."}, {"start": 137.04, "end": 141.76, "text": " Hmm, this is exactly the opposite of the previous one."}, {"start": 141.76, "end": 143.51999999999998, "text": " It looks so easy."}, {"start": 143.51999999999998, "end": 148.48, "text": " Just get the tiny opening on the red piece onto this part of the blue one, and we are done."}, {"start": 151.51999999999998, "end": 152.0, "text": " Uh-oh."}, {"start": 152.56, "end": 154.39999999999998, "text": " The curved part is in the way."}, {"start": 155.12, "end": 160.64, "text": " Something that seems so easy now suddenly seems absolutely impossible."}, {"start": 160.64, "end": 164.16, "text": " But fortunately, this learning algorithm does not get discouraged,"}, {"start": 164.16, "end": 169.52, "text": " and does not know what impossible is, and it finds this tricky series of rotations"}, {"start": 169.52, "end": 174.0, "text": " to go around the entirety of the blue piece and then finish the puzzle."}, {"start": 174.8, "end": 175.76, "text": " Glorious."}, {"start": 175.76, "end": 179.92, "text": " What a roller coaster, a hallmark of an expertly designed puzzle."}, {"start": 180.64, "end": 185.35999999999999, "text": " And to experience more of these puzzles, make sure to have a look at the paper and its website"}, {"start": 185.35999999999999, "end": 187.68, "text": " with a super fun interactive app."}, {"start": 187.68, "end": 192.72, "text": " If you do, you will also learn really cool new things, for instance, if you get back home"}, {"start": 192.72, "end": 197.84, "text": " in the middle of the night after a long day and two of your keys are stuck together,"}, {"start": 197.84, "end": 200.4, "text": " you will know exactly how to handle it."}, {"start": 200.4, "end": 205.92, "text": " And all this can be done through the power of machine learning and computer graphics research."}, {"start": 205.92, "end": 207.6, "text": " What a time to be alive."}, {"start": 208.24, "end": 211.36, "text": " So, how does this wizardry work exactly?"}, {"start": 211.36, "end": 215.2, "text": " The key techniques here are tunnel discovery and path planning."}, {"start": 215.2, "end": 220.64, "text": " First, a neural network looks at the puzzles and identifies where the gaps and notches are"}, {"start": 220.64, "end": 225.83999999999997, "text": " and specifies the starting position and the goal position that we need to achieve to finish the puzzle."}, {"start": 226.55999999999997, "end": 230.88, "text": " Then, a set of collision-free key configurations are identified,"}, {"start": 230.88, "end": 233.35999999999999, "text": " after which the blooming step can commence."}, {"start": 234.23999999999998, "end": 235.51999999999998, "text": " So, what does that do?"}, {"start": 236.16, "end": 241.35999999999999, "text": " Well, the goal is to be able to go through these narrow tunnels that represent tricky steps"}, {"start": 241.35999999999999, "end": 245.2, "text": " in the puzzles that typically require some unintuitive rotations."}, {"start": 245.83999999999997, "end": 249.11999999999998, "text": " These are typically the most challenging parts of the puzzles,"}, {"start": 249.12, "end": 255.28, "text": " and the blooming step starts from these narrow tunnels and helps us reach the bigger bubbles of the puzzle."}, {"start": 255.28, "end": 259.52, "text": " But as you see, not all roads connect, or at least not 
easily."}, {"start": 260.16, "end": 264.8, "text": " The forest-connect step tries to connect these roads through collision-free paths,"}, {"start": 264.8, "end": 271.36, "text": " and now, finally, all we have to do is find the shortest path from the start to the end point to"}, {"start": 271.36, "end": 272.32, "text": " solve the puzzle."}, {"start": 272.72, "end": 277.92, "text": " And also, according to my internal numbering system, this is two-minute papers,"}, {"start": 277.92, "end": 280.48, "text": " Episode number 478."}, {"start": 280.48, "end": 283.92, "text": " And to every single one of you fellow scholars who are watching this,"}, {"start": 283.92, "end": 289.68, "text": " thank you so much to all of you for being with us for so long on this incredible journey."}, {"start": 289.68, "end": 294.0, "text": " Man, I love my job, and I jump out of bed full of energy,"}, {"start": 294.0, "end": 299.44, "text": " and joy, knowing that I get to read research papers, and flip out together with many of you fellow"}, {"start": 299.44, "end": 302.08000000000004, "text": " scholars on a regular basis. Thank you."}, {"start": 302.64, "end": 305.92, "text": " This episode has been supported by weights and biases."}, {"start": 305.92, "end": 311.92, "text": " In this post, they show you how to visualize your psychic learned models with just a few lines of code."}, {"start": 311.92, "end": 314.88, "text": " Look at all those beautiful visualizations."}, {"start": 314.88, "end": 315.84000000000003, "text": " So good!"}, {"start": 315.84000000000003, "end": 320.72, "text": " You can even try an example in an interactive notebook through the link in the video description."}, {"start": 321.28000000000003, "end": 326.72, "text": " During my PhD studies, I trained a ton of neural networks, which were used in our experiments."}, {"start": 326.72, "end": 332.40000000000003, "text": " However, over time, there was just too much data in our repositories, and what I am looking for"}, {"start": 332.4, "end": 339.12, "text": " is not data, but insight. And that's exactly how weights and biases helps you by organizing your"}, {"start": 339.12, "end": 345.35999999999996, "text": " experiments. It is used by more than 200 companies and research institutions, including OpenAI,"}, {"start": 345.35999999999996, "end": 352.4, "text": " Toyota Research, GitHub, and more. And get this, weights and biases is free for all individuals,"}, {"start": 352.4, "end": 359.28, "text": " academics, and open source projects. Make sure to visit them through wnb.com slash papers,"}, {"start": 359.28, "end": 364.15999999999997, "text": " or just click the link in the video description, and you can get a free demo today."}, {"start": 364.15999999999997, "end": 369.28, "text": " Our thanks to weights and biases for their longstanding support, and for helping us make better"}, {"start": 369.28, "end": 398.23999999999995, "text": " videos for you. Thanks for watching, and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=lLa9DUiJICk
Making Talking Memes With Voice DeepFakes!
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Wav2Lip: Accurately Lip-syncing Videos In The Wild" is available here: - Paper: https://arxiv.org/abs/2008.10010 - Try it out! - https://github.com/Rudrabha/Wav2Lip More results are available on our Instagram page! - https://www.instagram.com/twominutepapers/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #deepfake
Dear Fellow Scholars, hold onto your papers because we have an emergency situation. Everybody can make deepfakes now by recording a voice sample such as this one, and the lips of the target subject will move as if they themselves were saying this. Not bad huh? And now let's see what else this new method can do. First let's watch this short clip of a speech and make sure to pay attention to the fact that the louder voice is the English translator, and if you pay attention you can hear the chancellor's original voice in the background too. Framework for our future cooperation. Dear Emmanuel. Only 8 months ago you were awarded the... So what is the problem here? Well, strictly speaking, there is no problem here, this is just the way the speech was recorded. However, what if we could recreate this video in a way that the chancellor's lips would be synced not to her own voice but to the voice of the English interpreter? This would give an impression as if the speech was given in English and the video content would follow what we hear. Now that sounds like something straight out of a science fiction movie, perhaps even with today's advanced machine learning techniques, but let's see if it's possible. This is a state of the art technique from last year that attempts to perform this. Um, development in a very different history. We signed this on the 56th anniversary of the Élysée Treaty of 1963. Hmm, there are extraneous lip movements which are the remnants of the original video, so much so that she seems to be giving two speeches at the same time. Not too convincing. So is this not possible to pull off? Well, now hold onto your papers and let's see how this new paper does at the same problem. On Germany but also for a starting point of a very different development in a very different history. We signed this on the 56th anniversary of the Élysée Treaty of... Wow, now that's significantly better. The remnants of the previous speech are still there, but the footage is much, much more convincing. What's even better is that the previous technique was published just one year ago by the same research group. Such a great leap in just one year. My goodness. So apparently this is possible. But I would like to see another example just to make sure. That's what he'd like to on the first issue, it's not for me to comment on the visit of Madame. Checkmark. So far this is an amazing leap, but believe it or not, this is just one of the easier applications of the new model. So let's see what else it can do. For instance, many of us are sitting at home yearning for some learning materials, but the vast majority of these were recorded in only one language. What if we could redub famous lectures into many other languages? The computer vision is moving on the basis of the deep learning very quickly. For example, we use the same color as the car now to make a lot of things. Look at that. Any lecture could be available in any language and look as if it were originally recorded in these foreign languages, as long as someone says the words, which can also be kind of automated through speech synthesis these days. So, it clearly works on real characters, but are you thinking what I am thinking? Three, what about lip syncing animated characters? Imagine if a line has to be changed in a Disney movie, can we synthesize new video footage without calling in the animators for yet another all-nighter? Let's give it a try. I'll go around. Wait for my call. Indeed we can, loving it. Let's do one more.
Four, of course we have a lot of these meme GIFs on the internet. What about redubbing those with an arbitrary line of our choice? Yup, that is indeed also possible. Well done. And imagine that this is such a leap, just one more work down the line from the 2019 paper; I can only imagine what results we will see one more paper down the line. It not only does what it does better, but it can also be applied to a multitude of problems. What a time to be alive. When we look under the hood, we see that the two key components that enable this wizardry are here and here. So what does this mean exactly? It means that we jointly improve the quality of the lip syncing and the visual quality of the video. These two modules curate the results offered by the main generator neural network and reject solutions that don't have enough detail or don't match the speech that we hear, and thereby they steer it towards much higher quality solutions. If we continue this training process for 29 hours for the lip syncing discriminator, we get these incredible results. Now let's have a quick look at the user study, and humans appear to almost never prefer the older method compared to this one. I tend to agree. If you consider these forgeries to be defects, then there you go. Useful defects that can potentially help people around the world stranded at home to study and improve themselves. Imagine what good this could do. Well done. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000 and V100 instances, and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
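The transcript above describes a generator whose outputs are judged by two extra modules, one for lip sync and one for visual quality. The following is a minimal PyTorch sketch of that weighting idea only, assuming both modules already output scores in [0, 1]; the loss forms, weights, and tensor shapes are illustrative assumptions rather than the exact formulation from the Wav2Lip paper.

```python
import torch
import torch.nn.functional as F

def generator_loss(generated, reference, sync_score, quality_score,
                   w_sync=0.3, w_quality=0.1):
    """Combine reconstruction, lip-sync, and visual-quality terms.

    generated, reference: video frames of shape (B, C, H, W)
    sync_score:    lip-sync expert's audio/video agreement score, in [0, 1]
    quality_score: visual-quality discriminator's realism score, in [0, 1]
    """
    recon = F.l1_loss(generated, reference)  # stay close to the reference frames
    # Push the sync and realism scores towards 1 so poor solutions get rejected.
    sync = F.binary_cross_entropy(sync_score, torch.ones_like(sync_score))
    quality = F.binary_cross_entropy(quality_score, torch.ones_like(quality_score))
    return recon + w_sync * sync + w_quality * quality

# Toy tensors stand in for real network outputs.
gen = torch.rand(2, 3, 96, 96)
ref = torch.rand(2, 3, 96, 96)
print(generator_loss(gen, ref, torch.rand(2, 1), torch.rand(2, 1)))
```

The point of the sketch is the curation described in the transcript: the generator is penalized whenever either judge scores its output poorly, which steers it towards frames that are both sharp and in sync with the audio.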
[{"start": 0.0, "end": 12.22, "text": " Dear Fellow Scholars, hold onto your papers because we have an emergency situation."}, {"start": 12.22, "end": 17.36, "text": " Everybody can make deepfakes now by recording a voice sample such as this one and the lips"}, {"start": 17.36, "end": 22.32, "text": " of the target subject will move as if they themselves were saying this."}, {"start": 22.32, "end": 24.400000000000002, "text": " Not bad huh?"}, {"start": 24.400000000000002, "end": 27.8, "text": " And now let's see what else this new method can do."}, {"start": 27.8, "end": 33.120000000000005, "text": " First let's watch this short clip of a speech and make sure to pay attention to the fact"}, {"start": 33.120000000000005, "end": 38.2, "text": " that the louder voice is the English translator and if you pay attention you can hear the"}, {"start": 38.2, "end": 41.4, "text": " chancellor's original voice in the background too."}, {"start": 41.4, "end": 43.92, "text": " Framework for our future cooperation."}, {"start": 43.92, "end": 44.92, "text": " Day Manuel."}, {"start": 44.92, "end": 52.68, "text": " Only 8 months ago you were awarded the..."}, {"start": 52.68, "end": 54.72, "text": " So what is the problem here?"}, {"start": 54.72, "end": 59.879999999999995, "text": " It's strictly speaking, there is no problem here, this is just the way the speech was recorded."}, {"start": 59.879999999999995, "end": 65.1, "text": " However, what if we could recreate this video in a way that the chancellor's lips would"}, {"start": 65.1, "end": 70.28, "text": " be synced not to her own voice but to the voice of the English interpreter?"}, {"start": 70.28, "end": 75.4, "text": " This would give an impression as if the speech was given in English and the video content"}, {"start": 75.4, "end": 77.48, "text": " would follow what we hear."}, {"start": 77.48, "end": 82.2, "text": " Now that sounds like something straight out of a science fiction movie, perhaps even with"}, {"start": 82.2, "end": 87.48, "text": " today's advanced machine learning techniques, but let's see if it's possible."}, {"start": 87.48, "end": 92.52000000000001, "text": " This is a state of the art technique from last year that attempts to perform this."}, {"start": 92.52000000000001, "end": 95.8, "text": " Um, development in a very different history."}, {"start": 95.8, "end": 102.08, "text": " We signed this on the 56th anniversary of the illicit treaty of 1966."}, {"start": 102.08, "end": 107.48, "text": " Hmm, there are extraneous lip movements which are the remnants of the original video so"}, {"start": 107.48, "end": 114.16, "text": " much so that she seems to be giving two speeches at the same time, not too convincing."}, {"start": 114.16, "end": 117.0, "text": " So is this not possible to pull off?"}, {"start": 117.0, "end": 123.4, "text": " Well, now hold onto your papers and let's see how this new paper does at the same problem."}, {"start": 123.4, "end": 129.44, "text": " On Germany but also for a starting point of a very different development in a very different"}, {"start": 129.44, "end": 130.76, "text": " history."}, {"start": 130.76, "end": 135.52, "text": " We signed this on the 56th anniversary of the illicit treaty of 9th."}, {"start": 135.52, "end": 138.32000000000002, "text": " Wow, now that's significantly better."}, {"start": 138.32000000000002, "end": 143.0, "text": " The remnants of the previous speech are still there, but the footage is much, much more"}, {"start": 143.0, "end": 144.0, "text": " convincing."}, 
{"start": 144.0, "end": 149.12, "text": " What's even better is that the previous technique was published just one year ago by the same"}, {"start": 149.12, "end": 150.60000000000002, "text": " research group."}, {"start": 150.60000000000002, "end": 153.28, "text": " Such a great leap in just one year."}, {"start": 153.28, "end": 154.76000000000002, "text": " My goodness."}, {"start": 154.76000000000002, "end": 157.16000000000003, "text": " So apparently this is possible."}, {"start": 157.16000000000003, "end": 160.0, "text": " But I would like to see another example just to make sure."}, {"start": 160.0, "end": 165.52, "text": " That's what he'd like to on the first issue, it's not for me to comment on the visit of"}, {"start": 165.52, "end": 166.52, "text": " Madame."}, {"start": 166.52, "end": 167.52, "text": " Checkmark."}, {"start": 167.52, "end": 174.0, "text": " So far this is an amazing leap, but believe it or not, this is just one of the easier applications"}, {"start": 174.0, "end": 175.32, "text": " of the new model."}, {"start": 175.32, "end": 178.24, "text": " So let's see what acid can do."}, {"start": 178.24, "end": 183.12, "text": " For instance, many of us are sitting at home yearning for some learning materials, but"}, {"start": 183.12, "end": 187.64, "text": " the vast majority of these were recorded in only one language."}, {"start": 187.64, "end": 192.11999999999998, "text": " Before if we could read up famous lectures into many other languages."}, {"start": 192.11999999999998, "end": 197.35999999999999, "text": " The computer vision is moving on the basis of the deep learning very quickly."}, {"start": 197.35999999999999, "end": 202.55999999999997, "text": " For example, we use the same color as the car now to make a lot of things."}, {"start": 202.55999999999997, "end": 203.88, "text": " Look at that."}, {"start": 203.88, "end": 209.39999999999998, "text": " Any lecture could be available in any language and look as if they were originally recorded"}, {"start": 209.39999999999998, "end": 214.6, "text": " in these foreign languages as long as someone says the words, which can also be kind of"}, {"start": 214.6, "end": 218.2, "text": " automated through speech synthesis these days."}, {"start": 218.2, "end": 224.16, "text": " So, it clearly works on real characters, but are you thinking what I am thinking?"}, {"start": 224.16, "end": 228.32, "text": " Three, what about lip syncing animated characters?"}, {"start": 228.32, "end": 233.76, "text": " Imagine if a line has to be changed in a Disney movie, can we synthesize new video footage"}, {"start": 233.76, "end": 237.79999999999998, "text": " without calling in the animators for yet another all-nighter?"}, {"start": 237.79999999999998, "end": 239.79999999999998, "text": " Let's give it a try."}, {"start": 239.79999999999998, "end": 241.16, "text": " I'll go around."}, {"start": 241.16, "end": 242.56, "text": " Wait for my call."}, {"start": 242.56, "end": 245.56, "text": " Indeed we can, loving it."}, {"start": 245.56, "end": 247.04, "text": " Let's do one more."}, {"start": 247.04, "end": 250.84, "text": " Four, of course we have a lot of these meme gifts on the internet."}, {"start": 250.84, "end": 255.36, "text": " What about redubbing those with an arbitrary line of our choice?"}, {"start": 255.36, "end": 259.16, "text": " Yup, that is indeed also possible."}, {"start": 259.16, "end": 260.84000000000003, "text": " Well done."}, {"start": 260.84000000000003, "end": 265.92, "text": " And imagine that this is such a 
leap, just one more work down the line from the 2019"}, {"start": 265.92, "end": 271.44, "text": " paper, I can only imagine what results we will see one more paper down the line."}, {"start": 271.44, "end": 277.12, "text": " It not only does what it does better, but it can also be applied to a multitude of problems."}, {"start": 277.12, "end": 279.0, "text": " What a time to be alive."}, {"start": 279.0, "end": 283.72, "text": " When we look under the hood, we see that the two key components that enable this wizardry"}, {"start": 283.72, "end": 285.88, "text": " are here and here."}, {"start": 285.88, "end": 288.28, "text": " So what does this mean exactly?"}, {"start": 288.28, "end": 293.48, "text": " It means that we jointly improve the quality of the lip syncing and the visual quality of"}, {"start": 293.48, "end": 294.48, "text": " the video."}, {"start": 294.48, "end": 299.96, "text": " These two modules curate the results offered by the main generator neural network and reject"}, {"start": 299.96, "end": 305.52, "text": " solutions that don't have enough detail or don't match the speech that we hear and thereby"}, {"start": 305.52, "end": 309.15999999999997, "text": " they steer it towards much higher quality solutions."}, {"start": 309.15999999999997, "end": 314.2, "text": " If we continue this training process for 29 hours for the lip syncing discriminator, we"}, {"start": 314.2, "end": 316.59999999999997, "text": " get these incredible results."}, {"start": 316.59999999999997, "end": 323.0, "text": " Now let's have a quick look at the user study and humans appear to almost never prefer the"}, {"start": 323.0, "end": 325.76, "text": " older method compared to this one."}, {"start": 325.76, "end": 327.12, "text": " I tend to agree."}, {"start": 327.12, "end": 331.12, "text": " If you consider these forgeries to be defects, then there you go."}, {"start": 331.12, "end": 336.4, "text": " Useful defects that can potentially help people around the world stranded at home to study"}, {"start": 336.4, "end": 338.24, "text": " and improve themselves."}, {"start": 338.24, "end": 340.64, "text": " Imagine what good this could do."}, {"start": 340.64, "end": 342.28000000000003, "text": " Well done."}, {"start": 342.28000000000003, "end": 345.76, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 345.76, "end": 351.72, "text": " If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 351.72, "end": 359.64000000000004, "text": " They recently launched Quadro RTX 6000, RTX 8000 and V100 instances and hold onto your"}, {"start": 359.64000000000004, "end": 366.12, "text": " papers because Lambda GPU Cloud can cost less than half of AWS and Asia."}, {"start": 366.12, "end": 371.6, "text": " Plus they are the only Cloud service with 48GB RTX 8000."}, {"start": 371.6, "end": 377.92, "text": " Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances"}, {"start": 377.92, "end": 379.8, "text": " workstations or servers."}, {"start": 379.8, "end": 385.8, "text": " Make sure to go to lambdaleps.com slash papers to sign up for one of their amazing GPU instances"}, {"start": 385.8, "end": 386.8, "text": " today."}, {"start": 386.8, "end": 391.28000000000003, "text": " Our thanks to Lambda for their longstanding support and for helping us make better videos"}, {"start": 391.28000000000003, "end": 392.28000000000003, "text": " for you."}, {"start": 392.28, "end": 419.03999999999996, "text": " Thanks for 
watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Zj1N4uE1ehk
Colorizing Fruits is Hard…Why? 🍓
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their report on this exact paper is available here: https://wandb.ai/wandb/instacolorization/reports/Overview-Instance-Aware-Image-Colorization---VmlldzoyOTk3MDI 📝 The paper "Instance-aware Image Colorization" is available here: https://ericsujw.github.io/InstColorization/ User study results: https://cgv.cs.nthu.edu.tw/InstColorization_data/Supplemental_Material/user_study_result.html DeOldify: https://github.com/jantic/DeOldify Follow them on Twitter for more! - https://twitter.com/deoldify 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-3359755/ Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to talk about image colorization. This is a problem where we take an old, old black and white photo, run it through one of these amazing new learning-based algorithms, and out comes an image that is properly colored. The obvious application of this is, of course, image restoration. What you see here is some amazing results with this new method, so clearly this can be done very well, and you will find out in a moment how challenging this problem is. But there are also less obvious applications, for instance, image compression too. Just imagine that we wouldn't have to transmit colored images over the internet, black and white photos would be fine, and if there is an AI-based algorithm in every computer, it would be able to restore the colors perfectly. This would save a lot of bandwidth and energy, and it would be a glorious thing to do. So how well do the current learning-based algorithms do at image colorization? Wow, that is a hard, hard question, and let's find out together why. First, let's have a look at the results of this new method that you saw so far against previously existing techniques. Here is the black and white input image, and here is the ground truth output that we have concealed from the algorithms. Only we see the result, this will remain our secret, and the algorithms only have access to the black and white input. Let's start. Yep, these are all learning-based methods, so they all appear to know what a strawberry is, and they color it accordingly. So far, so good. However, this problem is much, much harder than just colorizing strawberries. We have grapes too. That does not sound like much, but there are many problems with grapes. For instance, there are many kinds of grapes, and they are also translucent, and therefore their appearance depends way more on the lighting of the scene. That's a problem, because the algorithm not only has to know what objects are present in this image, but also what the lighting around them looks like, and how their material properties should interact with this kind of lighting. Goodness, that is a tremendously difficult problem. Previous methods did a reasonably good job at colorizing the image, but the grapes remain a challenge. And now, hold on to your papers, and let's see the new method. Wow, just look at that. The translucency of the grapes is captured beautifully, and the colors are very close to the ground truth. But that was just one example. What if we generate a lot of results and show them to a few people? What do the users say? Let's compare the DeOldify technique with this new method. DeOldify is an interesting specimen, as it combines three previously published papers really well, but it is not a published paper. Fortunately, its source code is available, and it is easy to compare results against it. So, let's see how it fares against this new technique. Apparently, they trade blows, but more often than not, the new method seems to come out ahead. Now note that we have a secret, and that secret is that we can see the reference images, which are hidden from the algorithms. This helps us a great deal, because we can mathematically compare the results of the learning algorithms to the reference. So, what do the numbers say? Yep, now you see why I said that it is very hard to tell which algorithm is the best, because they all trade blows, and depending on how we measure the difference between two images, we get a different winner.
All this is true, until we have a look at the results with this new paper. If you have been holding on to your papers, now squeeze that paper, because this new method smokes the competition on every data set, regardless of what we are measuring. So, what is this black magic? What is behind this wizardry? This method uses an off-the-shelf object detection module that takes the interesting elements out of the image, which are then colorized one by one. Of course, this way we haven't colorized the entire image, so parts of it would remain black and white, which is clearly not what we are looking for. As a remedy, let's color the entire image independently from the previous process. Now, we have an OK quality result, where the colors have reached the entirety of the image, but now, what do we do? We also colorize the objects independently, so some things are colorized twice, and they are different. This doesn't make any sense whatsoever, until we introduce another fusion module to the process that stitches these overlapping results into one coherent output. And the results are absolutely amazing. So, how long do we have to wait to get all this? If we take a small-ish image, the colorization can take place five times a second, and just two more papers down the line, this will easily run in real time, I am sure. You know what? With the pace of progress in machine learning research today, maybe even just one more paper down the line. So, what about the limitations? You remember that this contains an object detector, and if it goes haywire, all bets are off, and we might have to revert to the merely OK full-image colorization method. Also, note that all of these comparisons showcase image colorization, while DeOldify is also capable of colorizing videos as well. But of course, one step at a time. What a time to be alive! What you see here is a report on this exact paper we have talked about, which was made by Weights & Biases. I put a link to it in the description, make sure to have a look, I think it helps you understand this paper better. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs, such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you have an open source, academic, or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or click the link in the video description to start tracking your experiments in five minutes. Our thanks to Weights & Biases for their long-standing support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
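The pipeline described above, per-instance colorizations merged with a full-image colorization, can be sketched as a simple blending step. In the actual paper a learned fusion module predicts per-pixel blending weights; the fixed weight and box-shaped regions below are simplifying assumptions for illustration only.

```python
import numpy as np

def fuse_colorizations(full_ab, instance_abs, boxes, instance_weight=0.8):
    """Blend per-instance ab-channel predictions into the full-image prediction.

    full_ab:      (H, W, 2) color prediction for the whole image
    instance_abs: list of (h, w, 2) predictions, one per detected object crop
    boxes:        list of (y0, x0, y1, x1) crop coordinates in the full image
    """
    fused = full_ab.copy()
    for ab, (y0, x0, y1, x1) in zip(instance_abs, boxes):
        region = fused[y0:y1, x0:x1]
        # Instance prediction dominates inside its box; full-image prediction elsewhere.
        fused[y0:y1, x0:x1] = (1 - instance_weight) * region + instance_weight * ab
    return fused

# Toy data: one detected object covering the top-left quarter of a 64x64 image.
full = np.zeros((64, 64, 2), dtype=np.float32)
instance = np.ones((32, 32, 2), dtype=np.float32)
fused = fuse_colorizations(full, [instance], [(0, 0, 32, 32)])
print(fused[0, 0], fused[40, 40])  # blended inside the box, untouched outside
```

This also makes the failure mode mentioned in the transcript concrete: if the object detector misses an object or returns a bad box, the corresponding region falls back to the plain full-image colorization.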
[{"start": 0.0, "end": 4.5600000000000005, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.5600000000000005, "end": 8.0, "text": " Today we are going to talk about image colorization."}, {"start": 8.0, "end": 11.76, "text": " This is a problem where we take an old, old black and white photo,"}, {"start": 11.76, "end": 15.6, "text": " run it through one of these amazing new learning-based algorithms,"}, {"start": 15.6, "end": 18.96, "text": " and out comes an image that is properly colored."}, {"start": 18.96, "end": 23.12, "text": " The obvious application of this is, of course, image restoration."}, {"start": 23.12, "end": 26.64, "text": " What you see here is some amazing results with this new method,"}, {"start": 26.64, "end": 31.12, "text": " so clearly this can be done very well and you will find out in a moment"}, {"start": 31.12, "end": 33.28, "text": " how challenging this problem is."}, {"start": 33.28, "end": 38.08, "text": " But there are also less obvious applications, for instance, image compression too."}, {"start": 38.72, "end": 43.6, "text": " Just imagine that we wouldn't have to transmit colored images over the internet,"}, {"start": 43.6, "end": 49.44, "text": " black and white photos would be fine, and if there is an AIB's algorithm in every computer,"}, {"start": 49.44, "end": 52.88, "text": " they would be able to restore the colors perfectly."}, {"start": 52.88, "end": 58.160000000000004, "text": " This would save a lot of bandwidth and energy, and it would be a glorious thing to do."}, {"start": 58.160000000000004, "end": 63.760000000000005, "text": " So how well do the current learning-based algorithms do at image colorization?"}, {"start": 63.760000000000005, "end": 69.12, "text": " Wow, that is a hard, hard question, and let's find out together why."}, {"start": 69.76, "end": 74.08, "text": " First, let's have a look at the results of this new method that you saw so far"}, {"start": 74.08, "end": 76.56, "text": " against previously existing techniques."}, {"start": 76.56, "end": 80.4, "text": " Here is the black and white input image, and here is the ground truth output"}, {"start": 80.4, "end": 83.44000000000001, "text": " that we have concealed from the algorithms."}, {"start": 83.44000000000001, "end": 86.4, "text": " Only we see the result, this will remain our secret,"}, {"start": 86.4, "end": 89.60000000000001, "text": " and the algorithms only have access to the black and white input."}, {"start": 90.32000000000001, "end": 90.88000000000001, "text": " Let's start."}, {"start": 92.56, "end": 98.08000000000001, "text": " Yep, these are all learning-based methods, so they all appear to know what a strawberry is,"}, {"start": 98.08000000000001, "end": 100.08000000000001, "text": " and they color it accordingly."}, {"start": 100.64000000000001, "end": 102.16000000000001, "text": " So far, so good."}, {"start": 102.16000000000001, "end": 106.96000000000001, "text": " However, this problem is much, much harder than just colorizing strawberries."}, {"start": 106.96, "end": 112.88, "text": " We have grapes too. 
That does not sound like much, but there are many problems with grapes."}, {"start": 112.88, "end": 117.19999999999999, "text": " For instance, there are many kinds of grapes, and they are also translucent,"}, {"start": 117.19999999999999, "end": 122.08, "text": " and therefore their appearance depends way more on the lighting of the scene."}, {"start": 122.08, "end": 128.64, "text": " That's a problem, because the algorithm not only has to know what objects are present in this image,"}, {"start": 128.64, "end": 132.72, "text": " but what the lighting around them looks like, and how their material properties"}, {"start": 132.72, "end": 135.04, "text": " should interact with this kind of lighting."}, {"start": 135.04, "end": 138.72, "text": " Goodness, that is a tremendously difficult problem."}, {"start": 138.72, "end": 142.95999999999998, "text": " Previous methods did a reasonably good job at colorizing the image,"}, {"start": 142.95999999999998, "end": 144.79999999999998, "text": " but the grapes remain a challenge."}, {"start": 144.79999999999998, "end": 149.2, "text": " And now, hold on to your papers, and let's see the new method."}, {"start": 149.2, "end": 152.72, "text": " Wow, just look at that."}, {"start": 152.72, "end": 156.39999999999998, "text": " The translucency of the grapes is captured beautifully,"}, {"start": 156.39999999999998, "end": 159.04, "text": " and the colors are very close to the ground truth."}, {"start": 159.04, "end": 161.76, "text": " But that was just one example."}, {"start": 161.76, "end": 166.07999999999998, "text": " What if we generate a lot of results and show them to a few people?"}, {"start": 166.07999999999998, "end": 167.28, "text": " What do the users say?"}, {"start": 168.32, "end": 171.67999999999998, "text": " Let's compare the Deoldify technique with this new method."}, {"start": 171.67999999999998, "end": 178.32, "text": " Deoldify is an interesting specimen, as it combines three previously published papers really well,"}, {"start": 178.32, "end": 180.64, "text": " but it is not a published paper."}, {"start": 180.64, "end": 185.76, "text": " Fortunately, its source code is available, and it is easy to compare results against it."}, {"start": 185.76, "end": 189.35999999999999, "text": " So, let's see how it fares against this new technique."}, {"start": 189.36, "end": 192.56, "text": " Apparently, they trade blows, but more often than not,"}, {"start": 192.56, "end": 194.72000000000003, "text": " the new method seems to come out ahead."}, {"start": 195.44000000000003, "end": 200.96, "text": " Now note that we have a secret, and that secret is that we can see the reference images,"}, {"start": 200.96, "end": 203.12, "text": " which are hidden from the algorithms."}, {"start": 203.12, "end": 209.12, "text": " This helps us a great deal, because we can mathematically compare the results of the learning algorithms"}, {"start": 209.12, "end": 210.32000000000002, "text": " to the reference."}, {"start": 210.32000000000002, "end": 212.4, "text": " So, what do the numbers say?"}, {"start": 213.12, "end": 218.32000000000002, "text": " Yep, now you see why I said that it is very hard to tell which algorithm is the best,"}, {"start": 218.32, "end": 224.07999999999998, "text": " because they all trade blows, and depending on how we measure the difference between two images,"}, {"start": 224.07999999999998, "end": 226.07999999999998, "text": " we get a different winner."}, {"start": 226.07999999999998, "end": 230.79999999999998, "text": " All this is 
true, until we have a look at the results with this new paper."}, {"start": 231.35999999999999, "end": 235.35999999999999, "text": " If you have been holding on to your papers, now squeeze that paper,"}, {"start": 235.35999999999999, "end": 241.12, "text": " because this new method smokes a competition on every data set, regardless of what we are measuring."}, {"start": 242.32, "end": 244.0, "text": " So, what is this black magic?"}, {"start": 244.56, "end": 246.64, "text": " What is behind this wizardry?"}, {"start": 246.64, "end": 252.48, "text": " This method uses an of the shelf object detection module that takes the interesting elements out of"}, {"start": 252.48, "end": 255.92, "text": " the image, which are then colorized one by one."}, {"start": 255.92, "end": 261.28, "text": " Of course, this way we haven't colorized the entire image, so parts of it would remain black and"}, {"start": 261.28, "end": 263.91999999999996, "text": " white, which is clearly not what we are looking for."}, {"start": 264.64, "end": 269.68, "text": " As a remedy, let's color the entire image independently from the previous process."}, {"start": 270.4, "end": 276.08, "text": " Now, we have an OK quality result, where the colors have reached the entirety of the image,"}, {"start": 276.08, "end": 277.68, "text": " but now, what do we do?"}, {"start": 278.08, "end": 283.12, "text": " We also colorize the objects independently, so some things are colorized twice,"}, {"start": 283.12, "end": 284.71999999999997, "text": " and they are different."}, {"start": 284.71999999999997, "end": 287.28, "text": " This doesn't make any sense whatsoever,"}, {"start": 287.91999999999996, "end": 294.08, "text": " until we introduce another fusion module to the process that stitches these overlapping results"}, {"start": 294.08, "end": 296.15999999999997, "text": " into one coherent output."}, {"start": 296.15999999999997, "end": 298.4, "text": " And the results are absolutely amazing."}, {"start": 298.96, "end": 302.0, "text": " So, how much do we have to wait to get all this?"}, {"start": 302.0, "end": 307.44, "text": " If we take a small-ish image, the colorization can take place five times a second,"}, {"start": 307.44, "end": 312.64, "text": " and just two more papers down the line, this will easily run in real time, I am sure."}, {"start": 312.64, "end": 313.44, "text": " You know what?"}, {"start": 313.44, "end": 316.64, "text": " With the pace of progress in machine learning research today,"}, {"start": 316.64, "end": 319.44, "text": " maybe even just one more paper down the line."}, {"start": 320.0, "end": 322.16, "text": " So, what about the limitations?"}, {"start": 322.16, "end": 326.72, "text": " You remember that this contains an object detector, and if it goes haywire,"}, {"start": 326.72, "end": 333.52000000000004, "text": " all bets are off, and we might have to revert to the only OK full image colorization method."}, {"start": 333.52000000000004, "end": 338.24, "text": " Also, note that all of these comparisons showcase image colorization,"}, {"start": 338.24, "end": 342.96000000000004, "text": " while the oldify is also capable of colorizing videos as well."}, {"start": 342.96000000000004, "end": 345.44000000000005, "text": " But of course, one step at a time."}, {"start": 345.44000000000005, "end": 347.12, "text": " What a time to be alive!"}, {"start": 347.12, "end": 351.20000000000005, "text": " What you see here is a report of this exact paper we have talked about,"}, {"start": 351.20000000000005, "end": 
353.52000000000004, "text": " which was made by weights and biases."}, {"start": 353.52, "end": 358.47999999999996, "text": " I put a link to it in the description, make sure to have a look, I think it helps you understand"}, {"start": 358.47999999999996, "end": 360.0, "text": " this paper better."}, {"start": 360.0, "end": 364.71999999999997, "text": " Weight and biases provides tools to track your experiments in your deep learning projects."}, {"start": 364.71999999999997, "end": 368.4, "text": " Their system is designed to save you a ton of time and money,"}, {"start": 368.4, "end": 373.28, "text": " and it is actively used in projects at prestigious labs, such as OpenAI,"}, {"start": 373.28, "end": 376.24, "text": " Toyota Research, GitHub, and more."}, {"start": 376.24, "end": 378.88, "text": " And the best part is that if you have an open source,"}, {"start": 378.88, "end": 383.36, "text": " academic, or personal project, you can use their tools for free."}, {"start": 383.36, "end": 385.84, "text": " It really is as good as it gets."}, {"start": 385.84, "end": 389.84, "text": " Make sure to visit them through www.nb.com slash papers,"}, {"start": 389.84, "end": 395.36, "text": " or click the link in the video description to start tracking your experiments in five minutes."}, {"start": 395.36, "end": 398.56, "text": " Our thanks to weights and biases for their long-standing support,"}, {"start": 398.56, "end": 401.28, "text": " and for helping us make better videos for you."}, {"start": 401.28, "end": 412.0, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Jy_VZQnZqGk
This AI Creates A 3D Model of You! 🚶‍♀️
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://app.wandb.ai/stacey/deep-drive/reports/Image-Masks-for-Semantic-Segmentation--Vmlldzo4MTUwMw 📝 The paper "PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization" is available here: https://shunsukesaito.github.io/PIFuHD/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
De haircutotokim ke sociallye Overf22-ban egyszerűen edizzé van lehet pelökszbelül, és én больше de te szerelhetik az FT30 ára volt így, és kiverszekre helyen velem deniedig. Szíteni, hogyha boyrát encompasszt justo nincs a muzá 60 tutulci lime,mondom a period A polson is kerülni, extendingiér részni el幹嘛, Menni a vallóğlu 짴�ítyvel éjűvel, felind ezekzköntjenli és angadása a utá everything, de már hogyan ez azύ ilyen engelyizet ezek been állánban fériak. Öconre拿z industráthat科ám. Ha egy Miatt snowball Отni, Mongosan benefitsazt szálasz importantbó religion darkató ha secondsjínű csináltálok viszőtt úgyelférme és lesz verdadásósnubban a csapélt a napadógnak a v saltta. Sonya BER Nézzel, karr TheCast De ce omnésre is, egyre vecz Satanibilities volt az conquoluól talán, ez kivár月 important THOM Wandig lowraexhalesi gejus, és belทüttületi, their Katherine-d Selur biztosba, de illet내 estirakkéndi ez bon 9. Ne snekezd先 unboxan im cables volt az, hogy én ha patizsi állt szoléri meg maga, és halos panber gészélenek nem lefítani. S exposure illetés listenben nem megszologistson, Básját el ez Van akar agasps új acreszt ücent a csevet fel.çozont az őtsszelehet fogani körepozni kicsi, az Outroban is a ré atékeléde része próbálok, hogy behasizerszett semmi a 800, de nem apresent usted halb wiredok stall томál Third compress doeszt valamort sabaz easieráj elé shelfárabad és a docommelyször iPodoklon meg interactive erfüldül ha a fいますiveszont wird egy egész kitit interestedsec a meomingsek a lesz visszagy, A Dragon800 działan baszt規te dolgó, was van-avatna viss proportions, és az ölog- Vávoljok mag涅cséter a m little uram perok kannola jár Auto4A napART-e munt ashog a maximára is, draggingó人re de�edétgemit dolgnalon, hogy Majd meg a lenne se lesz. Re taki bagszá leutatva az, meg novelty hidden visual.類ása kaholpl word, a<|de|><|transcribe|> sére jоны kundert édme swordsmanopulav pressának, majd puede den leszár meg mault. Ez a mault marad tenerドlebben útzenek szü Valerie 2021 способimpression recognized ak裃baamtok. végétmi 400-os érlvésztét, felég Robot, de az időkől eség unikorba�uns signature pimőba terénn Hongert sészenki務lásban, nem sudd hosted Lyn ju goodbye, ezeken hív Tingajom a<|fa|><|translate|>, nem ki mít- Intelligence Jvének Ózban, legétsésem Karrakom éreltő fißkül belettem, de gondol она catászéks slateek. Round 16.athol zse Ed视os lóáánból ide az elég innovativesolomet. Tehől reagálják minden affot, mi sazzába létalába én leghajadomDaSz ahoraban, meg nevegy tőd descendedod. Azan hogy ilyen átor az a fegykéssel általánál adt kéne muchísimo újквubb az önéleménye is, a sólo butjáеннаяbiIL� s civilization-nak, teh overlaSomethingi söd connection mobile. és de limpzelnyukában ő� regulályos bere àj짝án amennづround! Gobben is van οly Store Pěreu nyistem.. Vagy már.
[{"start": 0.0, "end": 27.44, "text": " De haircutotokim ke sociallye"}, {"start": 27.44, "end": 29.86, "text": " Overf22-ban egyszer\u0171en edizz\u00e9 van lehet pel\u00f6kszbel\u00fcl, \u00e9s \u00e9n \u0431\u043e\u043b\u044c\u0448\u0435 de te szerelhetik az"}, {"start": 29.86, "end": 32.94, "text": " FT30 \u00e1ra volt \u00edgy, \u00e9s kiverszekre helyen velem deniedig."}, {"start": 32.94, "end": 37.2, "text": " Sz\u00edteni, hogyha boyr\u00e1t encompasszt justo nincs a muz\u00e1 60 tutulci lime,"}, {"start": 37.2, "end": 51.28, "text": "mondom a period"}, {"start": 51.28, "end": 54.68, "text": " A polson is ker\u00fclni, extendingi\u00e9r r\u00e9szni el\u5e79\u561b,"}, {"start": 54.800000000000004, "end": 57.24, "text": " Menni a vall\u00f3\u011flu \uc9f4\ufffd\u00edtyvel \u00e9j\u0171vel,"}, {"start": 57.42, "end": 59.04, "text": " felind ezekzk\u00f6ntjenli \u00e9s angad\u00e1sa a ut\u00e1 everything,"}, {"start": 59.18, "end": 62.88, "text": " de m\u00e1r hogyan ez az\u03cd ilyen engelyizet ezek been \u00e1ll\u00e1nban f\u00e9riak."}, {"start": 62.980000000000004, "end": 65.62, "text": " \u00d6conre\u62ffz industr\u00e1that\u79d1\u00e1m."}, {"start": 65.72, "end": 69.02000000000001, "text": " Ha egy Miatt snowball \u041e\u0442ni,"}, {"start": 69.12, "end": 73.06, "text": " Mongosan benefitsazt sz\u00e1lasz importantb\u00f3 religion darkat\u00f3 ha secondsj\u00edn\u0171"}, {"start": 73.22, "end": 76.68, "text": " csin\u00e1lt\u00e1lok visz\u0151tt \u00fagyelf\u00e9rme"}, {"start": 76.82, "end": 80.5, "text": " \u00e9s lesz verdad\u00e1s\u00f3snubban a csap\u00e9lt a napad\u00f3gnak a v saltta."}, {"start": 80.5, "end": 83.2, "text": " Sonya BER"}, {"start": 83.88, "end": 92.76, "text": " N\u00e9zzel, karr TheCast"}, {"start": 93.24, "end": 104.52, "text": " De ce omn\u00e9sre is, egyre vecz"}, {"start": 104.52, "end": 111.3, "text": " Satanibilities volt az conquolu\u00f3l tal\u00e1n,"}, {"start": 111.3, "end": 115.34, "text": " ez kiv\u00e1r\u6708 important THOM Wandig lowraexhalesi gejus,"}, {"start": 115.34, "end": 118.92, "text": " \u00e9s bel\u0e17\u00fctt\u00fcleti, their Katherine-d Selur biztosba,"}, {"start": 118.92, "end": 119.74, "text": " de illet\ub0b4 estirakk\u00e9ndi ez bon 9."}, {"start": 119.74, "end": 123.56, "text": " Ne snekezd\u5148 unboxan im cables volt az,"}, {"start": 123.56, "end": 125.44, "text": " hogy \u00e9n ha patizsi \u00e1llt szol\u00e9ri meg maga,"}, {"start": 125.44, "end": 129.07999999999998, "text": " \u00e9s halos panber g\u00e9sz\u00e9lenek nem lef\u00edtani."}, {"start": 129.07999999999998, "end": 132.6, "text": " S exposure illet\u00e9s listenben nem megszologistson,"}, {"start": 132.6, "end": 136.01999999999998, "text": " B\u00e1sj\u00e1t el ez Van akar agasps \u00faj acreszt \u00fccent a csevet fel."}, {"start": 136.5, "end": 139.5, "text": "\u00e7ozont az \u0151tsszelehet fogani k\u00f6repozni kicsi,"}, {"start": 139.6, "end": 144.98, "text": " az Outroban is a r\u00e9 at\u00e9kel\u00e9de r\u00e9sze pr\u00f3b\u00e1lok,"}, {"start": 145.07999999999998, "end": 147.84, "text": " hogy behasizerszett semmi a 800,"}, {"start": 147.98, "end": 150.1, "text": " de nem apresent usted halb wiredok stall \u0442\u043e\u043c\u00e1l Third compress"}, {"start": 150.2, "end": 154.94, "text": " doeszt valamort sabaz easier\u00e1j el\u00e9 shelf\u00e1rabad \u00e9s"}, {"start": 155.04, "end": 158.18, "text": " a docommelysz\u00f6r iPodoklon meg interactive"}, {"start": 158.28, "end": 158.62, "text": " erf\u00fcld\u00fcl ha a f\u3044\u307e\u3059iveszont wird 
egy eg\u00e9sz"}, {"start": 158.72, "end": 160.42, "text": " kitit interestedsec a meomingsek"}, {"start": 160.51999999999998, "end": 161.72, "text": " a lesz visszagy,"}, {"start": 161.72, "end": 185.28, "text": " A Dragon800 dzia\u0142an baszt\u898fte dolg\u00f3, was van-avatna viss proportions, \u00e9s az \u00f6log-"}, {"start": 185.28, "end": 189.44, "text": " V\u00e1voljok mag\u6d85cs\u00e9ter a m little uram perok kannola j\u00e1r Auto4A napART-e munt ashog a maxim\u00e1ra is,"}, {"start": 189.44, "end": 190.12, "text": " dragging\u00f3\u4ebare de\ufffded\u00e9tgemit dolgnalon,"}, {"start": 190.12, "end": 194.16, "text": " hogy Majd meg a lenne se lesz."}, {"start": 197.84, "end": 200.54, "text": " Re taki bagsz\u00e1 leutatva az,"}, {"start": 200.54, "end": 200.36, "text": " meg novelty hidden visual."}, {"start": 200.16, "end": 202.66, "text": "\u985e\u00e1sa kaholpl word,"}, {"start": 202.66, "end": 185.28, "text": " a"}, {"start": 186.42, "end": 203.64, "text": " s\u00e9re j\u043e\u043d\u044b kundert \u00e9dme swordsmanopulav press\u00e1nak,"}, {"start": 203.46, "end": 205.64, "text": " majd puede den lesz\u00e1r meg mault."}, {"start": 207.12, "end": 211.24, "text": " Ez a mault marad tener\u30c9lebben \u00fatzenek sz\u00fc Valerie"}, {"start": 209.68, "end": 187.04, "text": " 2021 \u0441\u043f\u043e\u0441\u043e\u0431impression recognized"}, {"start": 212.68, "end": 212.6, "text": " ak\u88c3baamtok."}, {"start": 212.6, "end": 215.29999999999998, "text": " v\u00e9g\u00e9tmi 400-os \u00e9rlv\u00e9szt\u00e9t,"}, {"start": 215.42, "end": 217.35999999999999, "text": " fel\u00e9g Robot,"}, {"start": 217.48, "end": 221.85999999999999, "text": " de az id\u0151k\u0151l es\u00e9g unikorba\ufffduns signature pim\u0151ba ter\u00e9nn Hongert s\u00e9szenki\u52d9l\u00e1sban,"}, {"start": 222.0, "end": 225.04, "text": " nem sudd hosted Lyn ju goodbye,"}, {"start": 225.16, "end": 223.32, "text": " ezeken h\u00edv Tingajom a"}, {"start": 214.66, "end": 234.32, "text": ", nem ki m\u00edt- Intelligence Jv\u00e9nek \u00d3zban,"}, {"start": 233.44, "end": 236.4, "text": " leg\u00e9ts\u00e9sem Karrakom \u00e9relt\u0151 fi\u00dfk\u00fcl belettem,"}, {"start": 236.5, "end": 240.2, "text": " de gondol \u043e\u043d\u0430 cat\u00e1sz\u00e9ks slateek."}, {"start": 231.45999999999998, "end": 242.24, "text": " Round 16."}, {"start": 242.24, "end": 245.64000000000001, "text": "athol zse Ed\u89c6os l\u00f3\u00e1\u00e1nb\u00f3l ide az el\u00e9g innovativesolomet."}, {"start": 245.62, "end": 250.9, "text": " Teh\u0151l reag\u00e1lj\u00e1k minden affot, mi sazz\u00e1ba l\u00e9tal\u00e1ba \u00e9n leghajadomDaSz ahoraban,"}, {"start": 250.94, "end": 253.4, "text": " meg nevegy t\u0151d descendedod."}, {"start": 254.54000000000002, "end": 262.08, "text": " Azan"}, {"start": 265.16, "end": 269.82, "text": " hogy ilyen \u00e1tor az a fegyk\u00e9ssel \u00e1ltal\u00e1n\u00e1l adt k\u00e9ne much\u00edsimo \u00faj\u043a\u0432ubb az \u00f6n\u00e9lem\u00e9nye is,"}, {"start": 269.82, "end": 294.4, "text": " a s\u00f3lo butj\u00e1\u0435\u043d\u043d\u0430\u044fbiIL\ufffd s civilization-nak, teh overlaSomethingi s\u00f6d connection mobile."}, {"start": 294.4, "end": 298.29999999999995, "text": " \u00e9s de limpzelnyuk\u00e1ban \u0151\ufffd regul\u00e1lyos bere \u00e0j\uc9dd\u00e1n amenn\u3065round!"}, {"start": 298.29999999999995, "end": 301.5, "text": " Gobben is van \u03bfly Store P\u011breu nyistem.."}, {"start": 301.5, "end": 303.5, "text": " Vagy m\u00e1r."}]
Two Minute Papers
https://www.youtube.com/watch?v=ooZ9rUYOFI4
Simulating Dragons Under Cloth Sheets! 🐲
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://app.wandb.ai/stacey/droughtwatch/reports/Drought-Watch-Benchmark-Progress--Vmlldzo3ODQ3OQ 📝 The paper "Local Optimization for Robust Signed Distance Field Collision" is available here: https://mmacklin.com/sdfcontact.pdf 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. Today, we are going to see a lot of physics simulations with many, many collisions. In particular, you will see a lot of beautiful footage which contains contact between thin shells and rigid bodies. In this simulation program, at least one of these objects will always be represented as a signed distance field. This representation is useful because it helps us rapidly compute whether something is inside or outside of this object. However, a collision here takes two objects, of course, so the other object will be represented as a triangle mesh, which is perhaps the most common way of storing object geometry in computer graphics. However, with this, we have a problem. Signed distance fields are great, and triangle meshes are also great for a number of applications; however, computing where they overlap when they collide is still slow and difficult. If that does not sound bad enough, it gets even worse than that. How so? Let's have a look together. Experiment number one. Let's try to intersect this cone with this rubbery sheet using a traditional technique, and there is only one rule: no poking through is allowed. Well, guess what just happened? This earlier technique is called point sampling, and we either have to check too many points in the two geometries against each other, which takes way too long and still fails, or we skimp on some of them, but then this happens. Important contact points go missing. Not good. Let's see how this new method does with this case. Now that's what I'm talking about. No poking through anywhere to be seen, and let's have another look. Wait a second. Are you seeing what I am seeing? Look at this part. After the first collision, the contact points are moving ever so slightly, many, many times, and the new method is not missing any of them. Then things get a little out of hand, and it still works perfectly. Amazing. I can only imagine how many of these interactions the previous point sampling technique would miss, but we won't know, because it has already failed long ago. Let's do another one. Experiment number two. Dragon versus cloth sheet. This is the previous point sampling method. We now see that it can find some of the interactions, but many others go missing, and due to the anomalies, we can't continue the animation by pulling the cloth sheet off the dragon, because it is stuck. Let's see how the new method fares in this case. Oh yeah. Nothing pokes through, and therefore now we can continue the animation by doing this. Excellent. Experiment number three. Robo-curtain. Point sampling. Oh no. This is a disaster. And now, hold on to your papers and marvel at the newly proposed method. Just look at how beautifully we can pull the robo-curtain through this fine geometry. Loving it. So, of course, if you are a seasoned Fellow Scholar, you probably want to know how much computation we have to do with the old and new methods. How much longer do I have to wait for the new, improved technique? Let's have a look together. Each rigid shell here is a mesh that uses 129,000 triangles, and the old point sampling method took 15 milliseconds to compute the collisions, and this time, it has done reasonably well. What about the new one? How much more computation do we have to perform to make sure that the simulations are robust? Please stop the video and make a guess. I'll wait. Alright, let's see, and the new one does it in half a millisecond. Half a millisecond.
It is not slower at all, quite the opposite, it is 30 times faster. My goodness. Huge congratulations on yet another masterpiece from scientists at NVIDIA and the University of Copenhagen. While we look at some more results, in case you're wondering, the authors used NVIDIA's Omniverse platform to create these amazing rendered worlds. And now, with this new method, we can infuse our physics simulation programs with a robust and blazing-fast collision detector, and I truly can't wait to see where talented artists will take these tools. What a time to be alive. This episode has been supported by Weights and Biases. In this post, they show you how to use their system to visualize how well people are doing in the Drought Watch benchmark. What's more, you can even try an example in an interactive notebook through the link in the video description. Weights and Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you have an open source, academic, or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or click the link in the video description to start tracking your experiments in five minutes. Our thanks to Weights and Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
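A rough sketch of the signed distance field idea from this episode, assuming an analytic sphere SDF as a stand-in for a precomputed field, and plain point sampling of triangle-mesh vertices (sampling only at vertices is exactly why contacts between samples can go missing); the arithmetic at the end just reproduces the quoted 30x speedup. This is an illustration of the general technique, not the paper's actual algorithm.

import numpy as np

def sdf_sphere(points, center, radius):
    # Signed distance to a sphere: negative inside, positive outside.
    return np.linalg.norm(points - center, axis=-1) - radius

# Hypothetical triangle-mesh vertices of the other object (count matches the
# 129,000-triangle figure only loosely, for flavor).
vertices = np.random.rand(129_000, 3) * 2.0 - 1.0

d = sdf_sphere(vertices, center=np.array([0.0, 0.0, 0.0]), radius=0.5)
penetrating = vertices[d < 0.0]   # point-sampled contacts: only these vertices are
                                  # detected; anything happening between them is missed

# Timing numbers quoted in the transcript: 15 ms versus 0.5 ms per collision query.
print(15.0 / 0.5)                 # 30.0, i.e. the reported 30x speedup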
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zonai-Fehir."}, {"start": 4.6000000000000005, "end": 10.0, "text": " Today, we are going to see a lot of physics simulations with many many collisions."}, {"start": 10.0, "end": 17.6, "text": " In particular, you will see a lot of beautiful footage which contains contact between thin shells and rigid bodies."}, {"start": 17.6, "end": 24.6, "text": " In this simulation program, at least one of these objects will always be represented as a signed distance field."}, {"start": 24.6, "end": 33.2, "text": " This representation is useful because it helps us rapidly compute whether something is inside or outside of this object."}, {"start": 33.2, "end": 41.0, "text": " However, a collision here takes two objects, of course, so the other object will be represented as a triangle mesh,"}, {"start": 41.0, "end": 46.8, "text": " which is perhaps the most common way of storing object geometry in computer graphics."}, {"start": 46.8, "end": 49.6, "text": " However, with this, we have a problem."}, {"start": 49.6, "end": 55.6, "text": " Sign distance fields were great, triangle meshes also were great for a number of applications,"}, {"start": 55.6, "end": 62.2, "text": " however, computing where they overlap when they collide is still slow and difficult."}, {"start": 62.2, "end": 66.2, "text": " If that does not sound bad enough, it gets even worse than that."}, {"start": 66.2, "end": 68.8, "text": " How so? Let's have a look together."}, {"start": 68.8, "end": 70.8, "text": " Experiment number one."}, {"start": 70.8, "end": 75.6, "text": " Let's try to intersect this cone with this rubbery sheet using a traditional technique,"}, {"start": 75.6, "end": 80.19999999999999, "text": " and there is only one rule no poking through is allowed."}, {"start": 80.19999999999999, "end": 82.6, "text": " Well, guess what just happened?"}, {"start": 82.6, "end": 89.8, "text": " This earlier technique is called point sampling, and we either have to check too many points in the two geometries against each other,"}, {"start": 89.8, "end": 97.6, "text": " which takes way too long and still fails, or we skimp on some of them, but then this happens."}, {"start": 97.6, "end": 102.0, "text": " Important contact points go missing. Not good."}, {"start": 102.0, "end": 106.4, "text": " Let's see how this new method does with this case."}, {"start": 106.4, "end": 108.4, "text": " Now that's what I'm talking about."}, {"start": 108.4, "end": 117.0, "text": " No poking through anywhere to be seen, and let's have another look."}, {"start": 117.0, "end": 122.4, "text": " Wait a second. Are you seeing what I am seeing? Look at this part."}, {"start": 122.4, "end": 128.6, "text": " After the first collision, the contact points are moving ever so slightly, many, many times,"}, {"start": 128.6, "end": 131.6, "text": " and the new method is not missing any of them."}, {"start": 131.6, "end": 136.79999999999998, "text": " Then things get a little out of hand, and it still works perfectly."}, {"start": 136.79999999999998, "end": 143.29999999999998, "text": " Amazing. I can only imagine how many of these interactions the previous point sampling technique would miss,"}, {"start": 143.29999999999998, "end": 148.0, "text": " but we won't know because it has already failed long ago."}, {"start": 148.0, "end": 151.2, "text": " Let's do another one. 
Experiment number two."}, {"start": 151.2, "end": 153.79999999999998, "text": " Dragon versus cloth sheet."}, {"start": 153.79999999999998, "end": 157.0, "text": " This is the previous point sampling method."}, {"start": 157.0, "end": 162.4, "text": " We now see that it can find some of the interactions, but many others go missing,"}, {"start": 162.4, "end": 170.0, "text": " and due to the anomalies, we can't continue the animation by pulling the cloth sheet of the dragon because it is stuck."}, {"start": 170.0, "end": 174.2, "text": " Let's see how the new method failed in this case."}, {"start": 174.2, "end": 184.8, "text": " Oh yeah. Nothing pokes through, and therefore now we can continue the animation by doing this."}, {"start": 184.8, "end": 186.0, "text": " Excellent."}, {"start": 186.0, "end": 191.0, "text": " Experiment number three. Robocurtin. Point sampling."}, {"start": 191.0, "end": 195.0, "text": " Oh no. This is a disaster."}, {"start": 195.0, "end": 199.8, "text": " And now hold on to your papers and marvel at the new proposed method."}, {"start": 199.8, "end": 205.0, "text": " Just look at how beautifully we can pull off the Robocurtin through this fine geometry."}, {"start": 205.0, "end": 206.2, "text": " Loving it."}, {"start": 206.2, "end": 210.8, "text": " So, of course, if you are a seasoned fellow scholar, you probably want to know"}, {"start": 210.8, "end": 215.2, "text": " how much computation do we have to do with the old and new methods."}, {"start": 215.2, "end": 219.0, "text": " How much longer do I have to wait for the new improved technique?"}, {"start": 219.0, "end": 220.6, "text": " Let's have a look together."}, {"start": 220.6, "end": 226.6, "text": " Each rigid shell here is a mesh that uses 129,000 triangles"}, {"start": 226.6, "end": 231.79999999999998, "text": " and the old point sampling method took 15 milliseconds to compute the collisions"}, {"start": 231.79999999999998, "end": 235.0, "text": " and this time it has done reasonably well."}, {"start": 235.0, "end": 236.39999999999998, "text": " What about the new one?"}, {"start": 236.39999999999998, "end": 242.2, "text": " How much more computation do we have to perform to make sure that the simulations are robust?"}, {"start": 242.2, "end": 245.2, "text": " Please stop the video and make a guess."}, {"start": 245.2, "end": 246.39999999999998, "text": " I'll wait."}, {"start": 246.39999999999998, "end": 251.79999999999998, "text": " Alright, let's see and the new one does it in half a millisecond."}, {"start": 251.79999999999998, "end": 254.0, "text": " Half a millisecond."}, {"start": 254.0, "end": 258.59999999999997, "text": " It is not slower at all, quite the opposite, 30 times faster."}, {"start": 258.59999999999997, "end": 260.0, "text": " My goodness."}, {"start": 260.0, "end": 264.8, "text": " Huge congratulations on yet another masterpiece from scientists at NVIDIA"}, {"start": 264.8, "end": 267.3, "text": " and the University of Copenhagen."}, {"start": 267.3, "end": 270.09999999999997, "text": " While we look at some more results in case you're wondering,"}, {"start": 270.1, "end": 275.5, "text": " the authors used NVIDIA's omniverse platform to create these amazing rendered worlds."}, {"start": 275.5, "end": 280.0, "text": " And now, with this new method, we can infuse our physics simulation programs"}, {"start": 280.0, "end": 283.70000000000005, "text": " with the robust and blazing fast collision detector"}, {"start": 283.70000000000005, "end": 288.20000000000005, "text": " and I 
truly can't wait to see where talented artists will take these tools."}, {"start": 288.20000000000005, "end": 290.20000000000005, "text": " What a time to be alive."}, {"start": 290.20000000000005, "end": 293.5, "text": " This episode has been supported by weights and biases."}, {"start": 293.5, "end": 297.20000000000005, "text": " In this post, they show you how to use their system to visualize"}, {"start": 297.2, "end": 300.59999999999997, "text": " how well people are doing in the draw-thruarch benchmark."}, {"start": 300.59999999999997, "end": 304.5, "text": " What's more, you can even try an example in an interactive notebook"}, {"start": 304.5, "end": 306.5, "text": " through the link in the video description."}, {"start": 306.5, "end": 309.7, "text": " Weight and biases provides tools to track your experiments"}, {"start": 309.7, "end": 311.3, "text": " in your deep learning projects."}, {"start": 311.3, "end": 315.0, "text": " Their system is designed to save you a ton of time and money"}, {"start": 315.0, "end": 318.3, "text": " and it is actively used in projects at prestigious labs"}, {"start": 318.3, "end": 322.8, "text": " such as OpenAI, Toyota Research, GitHub and more."}, {"start": 322.8, "end": 325.4, "text": " And the best part is that if you have an open source,"}, {"start": 325.4, "end": 329.79999999999995, "text": " academic or personal project, you can use their tools for free."}, {"start": 329.79999999999995, "end": 332.29999999999995, "text": " It really is as good as it gets."}, {"start": 332.29999999999995, "end": 336.29999999999995, "text": " Make sure to visit them through wnb.com slash papers"}, {"start": 336.29999999999995, "end": 341.79999999999995, "text": " or click the link in the video description to start tracking your experiments in five minutes."}, {"start": 341.79999999999995, "end": 345.0, "text": " Our thanks to weights and biases for their long-standing support"}, {"start": 345.0, "end": 347.7, "text": " and for helping us make better videos for you."}, {"start": 347.7, "end": 349.9, "text": " Thanks for watching and for your generous support"}, {"start": 349.9, "end": 355.9, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=F0QwAhUnpr4
Finally, Deformation Simulation... in Real Time! 🚗
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their report about a previous paper is available here: https://app.wandb.ai/stacey/stargan/reports/Cute-Animals-and-Post-Modern-Style-Transfer%3A-Stargan-V2-for-Multi-Domain-Image-Synthesis---VmlldzoxNzcwODQ 📝 The paper "Detailed Rigid Body Simulation with Extended Position Based Dynamics" is available here: - Paper: https://matthias-research.github.io/pages/publications/PBDBodies.pdf - Talk video: https://www.youtube.com/watch?v=zzy6u1z_l9A&feature=youtu.be Wish to see and hear the sound synthesis paper? - Our video: https://www.youtube.com/watch?v=rskdLEl05KI - Paper: https://research.cs.cornell.edu/Sound/mc/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. Today, with the power of computer graphics research, we can use our computers to run fluid simulations, simulate immersing a selection of objects into jelly, or tear meat in a way that, much like in reality, it tears along the muscle fibers. If we look at the abstract of this amazing new paper, we see this, quoting: this allows us to trace high-speed motion of objects colliding against curved geometry, to reduce the number of constraints, to increase the robustness of the simulation, and to simplify the formulation of the solver. What? This sounds impossible, or at the very least, outrageously good. Let's look at three examples of what it can do and see for ourselves if it can live up to its promise. One, it can simulate a steering mechanism full of joints and contacts. Yup, an entire servo steering mechanism is simulated with a prescribed mass ratio, loving it. I hereby declare that it passes inspection, and now we can take off for some off-roading. All of the movement is simulated really well, and wait a minute. Hold on to your papers. Are you seeing what I am seeing? Look, even the tire deformations are part of the simulation. Beautiful. And now, let's do a stress test and race through a bunch of obstacles and see how well those tires can take it. At the end of the video, I will tell you how much time it takes to simulate all this, and note that I had to look three times because I could not believe my eyes. Two, restitution. Or, in other words, we can smash an independent marble into a bunch of others, and their combined velocity will be correctly computed. We know for a fact that the computations are correct because, when I stop the video here, you can see that the marbles themselves are smiling. The joys of curved geometry and specular reflections. Of course, this is not true, because if we attempt to do the same with a classical earlier technique by the name of Position Based Dynamics, this would happen. Yes, the velocities became erroneously large, and the marbles jump off of the wire. And they still appear to be very happy about it. Of course, with the new technique, the simulation is much more stable and realistic. Talking about stability, is it stable only in a small-scale simulation, or can it take a huge scene with lots of interactions? Would it still work? Well, let's run a stress test and find out. Ha ha, this animation can run all day long, and not one thing appears to behave incorrectly. Loving this. Three, it can also simulate these beautiful high-frequency rolls that we often experience when we drop a coin on a table. This kind of interaction is very challenging to simulate correctly because of the high-frequency nature of the motion and the curved geometry that interacts with the table. I would love to see a technique that algorithmically generates the sound for this. I could almost hear its sound in my head. Believe it or not, this should be possible and is subject to some research attention in computer graphics. The animation here was given, but the sounds were algorithmically generated. Listen. Let me know in the comments if you are one of our OG Fellow Scholars who were here when that episode was published hundreds of videos ago. So, how long do we have to wait to simulate all of these crazy physical interactions? We mentioned that the tires are stiff and take a great deal of computation to simulate properly. So, as always, all-nighters, right? Nope. Look at that. Holy mother of papers.
The car example takes only 18 milliseconds to compute per frame, which means 55 frames per second. Goodness. Not only do we not need an all-nighter, we don't even need to leave for a coffee break. And the rolling marbles took even less, and, woohoo! The high-frequency coin example needs only one third of a millisecond, which means that we can generate more than 3,000 frames with it per second. We not only don't need an all-nighter or a coffee break, we don't even need to wait at all. Now, at the start of the video, I noted that the claim in the abstract sounds almost outrageous. It is because it promises to be able to do more than previous techniques, simplify the simulation algorithm itself, make it more robust, and do all this while being blazing fast. If someone told me that there is a work that does all this at the same time, I would say: give me that paper immediately, because I do not believe a word of it. And yet, it really lives up to its promise. Typically, as a research field matures, we see new techniques that can do more than previous methods, but the price to be paid for it is in the form of complexity. The algorithms get more and more involved over time, and with that, they often get slower and less robust. The engineers in the industry have to decide how much complexity they are willing to shoulder to be able to simulate all of these beautiful interactions. Don't forget, these code bases have to be maintained and improved for many, many years, so choosing a simple base algorithm is of utmost importance. But here, none of these factors need to be considered, because there is nearly no trade-off here. It is simpler, more robust, and better at the same time. It really feels like we are living in a science fiction world. What a time to be alive! Huge congratulations to scientists at NVIDIA and the University of Copenhagen for this. Don't forget, they could have kept the results for themselves, but they chose to share the details of this algorithm with everyone, free of charge. Thank you so much for doing this. What you see here is a report for a previous paper that we covered in this series, which was made by Weights and Biases. Weights and Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you have an open source, academic, or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or click the link in the video description to start tracking your experiments in five minutes. Our thanks to Weights and Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
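As a quick check of the arithmetic above, converting the quoted per-frame times into frame rates:

def fps(frame_time_ms):
    # Frames per second from a per-frame computation time in milliseconds.
    return 1000.0 / frame_time_ms

print(fps(18.0))       # ~55.6 -> the quoted 55 frames per second for the car example
print(fps(1.0 / 3.0))  # 3000.0 -> the quoted 3,000+ frames per second for the coin example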
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karajol Neyfahir."}, {"start": 4.4, "end": 10.64, "text": " Today, with the power of computer graphics research, we can use our computers to run fluid simulations,"}, {"start": 10.64, "end": 18.240000000000002, "text": " simulate, immersing a selection of objects into jelly, or tear meat in a way that much like in reality,"}, {"start": 18.240000000000002, "end": 20.96, "text": " it tears along the muscle fibers."}, {"start": 20.96, "end": 25.92, "text": " If we look at the abstract of this amazing new paper, we see this quoting."}, {"start": 25.92, "end": 31.6, "text": " This allows us to trace high-speed motion of objects, colliding a gas-curve geometry"}, {"start": 31.6, "end": 36.32, "text": " to reduce the number of constraints to increase the robustness of the simulation"}, {"start": 36.32, "end": 39.120000000000005, "text": " and to simplify the formulation of the solver."}, {"start": 39.92, "end": 40.72, "text": " What?"}, {"start": 40.72, "end": 45.2, "text": " This sounds impassable, but at the very least outragiously good."}, {"start": 45.2, "end": 50.56, "text": " Let's look at three examples of what it can do and see for ourselves if it can live up to its promise."}, {"start": 50.56, "end": 55.68, "text": " One, it can simulate a steering mechanism full of joints and compact."}, {"start": 56.56, "end": 63.2, "text": " Yup, an entire servo steering mechanism is simulated with a prescribed mass ratio, loving it."}, {"start": 63.760000000000005, "end": 69.04, "text": " I hereby declare that it passes inspection and now we can take off for some off-roading."}, {"start": 69.6, "end": 73.84, "text": " All of the movement is simulated really well and wait a minute."}, {"start": 74.4, "end": 78.16, "text": " Hold on to your papers. Are you seeing what I am seeing?"}, {"start": 78.16, "end": 82.56, "text": " Look, even the tire deformations are part of the simulation."}, {"start": 82.56, "end": 83.75999999999999, "text": " Beautiful."}, {"start": 83.75999999999999, "end": 90.8, "text": " And now, let's do a stress test and race through a bunch of obstacles and see how well those tires can take it."}, {"start": 90.8, "end": 95.52, "text": " At the end of the video, I will tell you how much time it takes to simulate all this"}, {"start": 95.52, "end": 100.72, "text": " and note that I had to look three times because I could not believe my eyes."}, {"start": 100.72, "end": 102.96, "text": " Two, herestitution."}, {"start": 102.96, "end": 109.36, "text": " Or, in other words, we can smash an independent marble into a bunch of others and their combined"}, {"start": 109.36, "end": 115.19999999999999, "text": " velocity will be correctly computed. We know for a fact that the computations are correct because,"}, {"start": 115.19999999999999, "end": 119.67999999999999, "text": " when I stop the video here, you can see that the marbles themselves are smiling."}, {"start": 120.16, "end": 126.47999999999999, "text": " The joys of curved geometry and specular reflections. 
Of course, this is not true because if we attempt"}, {"start": 126.48, "end": 133.12, "text": " to do the same with a classical earlier technique by the name Position Based Dynamics, this would happen."}, {"start": 133.12, "end": 138.4, "text": " Yes, the velocities became erroneously large and the marbles jump off of the wire."}, {"start": 138.4, "end": 141.76, "text": " And they still appear to be very happy about it."}, {"start": 141.76, "end": 146.56, "text": " Of course, with the new technique, the simulation is much more stable and realistic."}, {"start": 146.56, "end": 153.2, "text": " Talking about stability, is it stable only in a small-scale simulation or can it take a huge scene"}, {"start": 153.2, "end": 159.28, "text": " with lots of interactions? Would it still work? Well, let's run a stress test and find out."}, {"start": 160.16, "end": 166.23999999999998, "text": " Ha ha, this animation can run all day long and not one thing appears to behave incorrectly."}, {"start": 166.95999999999998, "end": 172.79999999999998, "text": " Loving this. Three, it can also simulate these beautiful high-frequency roles that we often"}, {"start": 172.79999999999998, "end": 178.32, "text": " experience when we drop a coin on a table. This kind of interaction is very challenging to"}, {"start": 178.32, "end": 183.6, "text": " simulate correctly because of the high-frequency nature of the motion and the curved geometry"}, {"start": 183.6, "end": 188.88, "text": " that interacts with the table. I would love to see a technique that algorithmically generates"}, {"start": 188.88, "end": 194.4, "text": " the sound for this. I could almost hear its sound in my head. Believe it or not, this should be"}, {"start": 194.4, "end": 200.64, "text": " possible and is subject to some research attention in computer graphics. The animation here was given"}, {"start": 200.64, "end": 211.35999999999999, "text": " but the sounds were algorithmically generated. Listen. Let me know in the comments if you are one"}, {"start": 211.35999999999999, "end": 216.64, "text": " of our OG Fellow Scholars who were here when this episode was published hundreds of videos ago."}, {"start": 217.2, "end": 221.92, "text": " So, how much do we have to wait to simulate all of these crazy physical interactions?"}, {"start": 222.48, "end": 227.76, "text": " We mentioned that the tires are stiff and take a great deal of computation to simulate properly."}, {"start": 227.76, "end": 236.39999999999998, "text": " So, as always, all nighters, right? Nope. Look at that. Holy matter of papers. The car example"}, {"start": 236.39999999999998, "end": 243.67999999999998, "text": " takes only 18 milliseconds to compute per frame, which means 55 frames per second. Goodness."}, {"start": 243.67999999999998, "end": 248.32, "text": " Not only do we not need an all-nighter, we don't even need to leave for a coffee break."}, {"start": 248.79999999999998, "end": 256.0, "text": " And the rolling marbles took even less and, woohoo! 
The high-frequency coin example needs only one"}, {"start": 256.0, "end": 262.56, "text": " third of a millisecond, which means that we can generate more than 3,000 frames with it per second."}, {"start": 262.56, "end": 267.28, "text": " We not only don't need an all-nighter or a coffee break, we don't even need to wait at all."}, {"start": 267.92, "end": 273.6, "text": " Now, at the start of the video, I noted that the claim in the abstract sounds almost outrageous."}, {"start": 273.6, "end": 278.32, "text": " It is because it promises to be able to do more than previous techniques,"}, {"start": 278.32, "end": 286.24, "text": " simplify the simulation algorithm itself, make it more robust, and do all this while being blazing fast."}, {"start": 286.24, "end": 290.56, "text": " If someone told me that there is a work that does all this at the same time,"}, {"start": 290.56, "end": 295.84, "text": " I would say that give me that paper immediately because I do not believe a word of it."}, {"start": 295.84, "end": 298.32, "text": " And yet, it really leaves up to its promise."}, {"start": 298.96, "end": 303.92, "text": " Typically, as a research field matures, we see new techniques that can do more than previous"}, {"start": 303.92, "end": 309.44, "text": " methods, but the price to be paid for it is in the form of complexity. The algorithms get more"}, {"start": 309.44, "end": 314.88, "text": " and more involved over time, and with that, they often get slower and less robust."}, {"start": 314.88, "end": 320.0, "text": " The engineers in the industry have to decide how much complexity they are willing to shoulder"}, {"start": 320.0, "end": 325.04, "text": " to be able to simulate all of these beautiful interactions. Don't forget, these code bases"}, {"start": 325.04, "end": 331.28000000000003, "text": " have to be maintained and improved for many, many years, so choosing a simple base algorithm"}, {"start": 331.28, "end": 336.55999999999995, "text": " is of utmost importance. But here, none of these factors need to be considered because there is"}, {"start": 336.55999999999995, "end": 342.79999999999995, "text": " nearly no trade off here. It is simpler, more robust, and better at the same time."}, {"start": 342.79999999999995, "end": 346.4, "text": " It really feels like we are living in a science fiction world."}, {"start": 346.4, "end": 351.84, "text": " What a time to be alive! Huge congratulations to scientists at NVIDIA and the University of"}, {"start": 351.84, "end": 356.08, "text": " Copenhagen for this. Don't forget, they could have kept the results for themselves,"}, {"start": 356.08, "end": 361.68, "text": " but they chose to share the details of this algorithm with everyone, free of charge. Thank you"}, {"start": 361.68, "end": 367.12, "text": " so much for doing this. What you see here is a report for a previous paper that we covered in this"}, {"start": 367.12, "end": 373.12, "text": " series, which was made by Wades and Biasis. Wades and Biasis provides tools to track your experiments"}, {"start": 373.12, "end": 378.32, "text": " in your deep learning projects. Their system is designed to save you a ton of time and money,"}, {"start": 378.32, "end": 385.28, "text": " and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub,"}, {"start": 385.28, "end": 391.11999999999995, "text": " and more. 
And the best part is that if you have an open source, academic, or personal project,"}, {"start": 391.11999999999995, "end": 395.67999999999995, "text": " you can use their tools for free. It really is as good as it gets."}, {"start": 395.67999999999995, "end": 401.91999999999996, "text": " Make sure to visit them through www.nb.com slash papers, or click the link in the video description"}, {"start": 401.91999999999996, "end": 407.11999999999995, "text": " to start tracking your experiments in five minutes. Our thanks to Wades and Biasis for their"}, {"start": 407.11999999999995, "end": 412.08, "text": " long-standing support and for helping us make better videos for you. Thanks for watching and"}, {"start": 412.08, "end": 422.08, "text": " for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=DxW_kk5LWYQ
Beautiful Elastic Simulations, Now Much Faster!
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://app.wandb.ai/safijari/dqn-tutorial/reports/Deep-Q-Networks-with-the-Cartpole-Environment--Vmlldzo4MDc2MQ 📝 The paper "IQ-MPM: An Interface Quadrature Material Point Method for Non-sticky Strongly Two-Way Coupled Nonlinear Solids and Fluids" is available here: https://yzhu.io/publication/mpmcoupling2020siggraph/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér, it is time for some fluids. Hmm-hmm. As many of you know, in this series we often talk about fluid simulations, and sometimes the examples showcase a fluid splash, but not much else. However, in real production environments, these simulations often involve complex scenes with many objects that interact with each other, and therein lies the problem. Computing these interactions is called coupling, and it is very difficult to get right, but it is necessary for many of the beautiful scenes you will see throughout this video. Getting this right is of utmost importance if we wish to create a realistic simulation where fluids and solids interact. So the first question would be, as many of these techniques build upon the material point method, or MPM in short, why not just use that? Well, let's do exactly that and see how it does on this scene. Let's drop the liquid ball on the bunny and... Uh-oh. A lot of it has now stuck to the bunny. This is not supposed to happen. So, what about the improved version of MPM? Yep, still too sticky. And now, let's have a look at how this new technique handles this situation. I want to see dry and floppy bunny ears. Yes, now that's what I'm talking about. Now then, that's great, but what else can this do? A lot more. For instance, we can engage in the favorite pastime of the computer graphics researcher, which is, of course, destroying objects in a spectacular manner. This is going to be a very challenging scene. Ouch. And now, let physics take care of the rest. This was a harrowing, but beautiful simulation. And we can try to challenge the algorithm even more. Here, we have three elastic spheres filled with water, and now, watch how they deform as they hit the ground and how the water gushes out exactly as it should. And now, hold on to your papers, because there is a great deal to be seen in this animation, but the most important part remains invisible. Get this. All three spheres use a different hyper-elasticity model to demonstrate that this new technique can be plugged into many existing techniques. And it works so seamlessly that I don't think anyone would be able to tell the difference. And it can do even more. For instance, it can also simulate wet sand. Wow. And I say wow not only because of this beautiful result, but because there is more behind it. If you are one of our hardcore long-time Fellow Scholars, you may remember that three years ago, we needed an entire paper to pull this off. This algorithm is more general and can simulate this kind of interaction between liquids and granular media as an additional side effect. We can also simulate dropping this creature into a piece of fluid, and as we increase the density of the creature, it sinks in in a realistic manner. While we are lifting frogs and helping an elastic bear take a bath, let's look at why this technique works so well. The key to achieving these amazing results in a reasonable amount of time is that this new method is able to find these interfaces where the fluids and solids meet, and it handles their interactions in a way that lets us advance the time in our simulation in larger steps than previous methods. This leads to not only these amazingly general and realistic simulations, but they also run faster. Furthermore, I am very happy about the fact that now we can not only simulate these difficult phenomena, but we don't even have to implement a separate technique for each of them.
We can take this one and simulate a wide variety of fluid-solid interactions. What a time to be alive! This episode has been supported by Weights & Biases. In this post, they show you how to use their system to train a deep reinforcement learner to become adept at the cartpole balancing problem. This is the algorithm that was able to master Atari Breakout that we often talk about in this series. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs, such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you have an open source, academic, or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or click the link in the video description to start tracking your experiments in 5 minutes. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
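The sticking artifact described in this transcript comes from the way standard MPM makes every material scatter its momentum onto one shared background grid. The toy sketch below, with made-up one-dimensional values, only illustrates that intuition; it is not the paper's interface quadrature scheme.

```python
# Toy 1D illustration (made-up numbers) of why single-grid MPM coupling is "sticky".
# A solid particle and a fluid particle both overlap the same grid node.
m_solid, v_solid = 1.0, +2.0   # solid moving right
m_fluid, v_fluid = 1.0, -2.0   # fluid moving left

# Standard single-grid MPM scatters all momentum to one shared node, so after the
# grid update both materials are forced to share a single velocity -> they stick.
v_shared = (m_solid * v_solid + m_fluid * v_fluid) / (m_solid + m_fluid)
print("shared-grid velocity:", v_shared)   # 0.0, the relative motion is lost

# A two-field treatment keeps a separate grid velocity per phase and only couples
# them through an interface condition (matching normal velocities while allowing
# tangential slip), which is the kind of non-sticky contact the new work targets.
v_solid_field, v_fluid_field = v_solid, v_fluid
print("two-field velocities:", v_solid_field, v_fluid_field)
```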
[{"start": 0.0, "end": 6.72, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir, it is time for some fluids."}, {"start": 6.72, "end": 7.82, "text": " Hmm-hmm."}, {"start": 7.82, "end": 18.18, "text": " As many of you know, in this series we often talk about fluid simulations and sometimes the examples showcase a fluid splash, but not much else."}, {"start": 18.18, "end": 29.0, "text": " However, in real production environments, these simulations often involve complex scenes with many objects that interact with each other and they're in lies the problem."}, {"start": 29.0, "end": 39.4, "text": " Computing these interactions is called coupling and it is very difficult to get right, but is necessary for many of the beautiful scenes you will see throughout this video."}, {"start": 39.4, "end": 46.84, "text": " Getting this right is of utmost importance if we wish to create a realistic simulation where fluids and solids interact."}, {"start": 46.84, "end": 55.56, "text": " So the first question would be, as many of these techniques build upon the material point method or MPM in short, why not just use that?"}, {"start": 55.56, "end": 60.36, "text": " Well, let's do exactly that and see how it does on this scene."}, {"start": 60.36, "end": 64.84, "text": " Let's draw up the liquid ball on the bunny and..."}, {"start": 64.84, "end": 68.44, "text": " Uh-oh. A lot of it has now stuck to the bunny."}, {"start": 68.44, "end": 70.68, "text": " This is not supposed to happen."}, {"start": 70.68, "end": 75.72, "text": " So, what about the improved version of MPM?"}, {"start": 75.72, "end": 79.88, "text": " Yep, still too sticky."}, {"start": 79.88, "end": 90.11999999999999, "text": " And now, let's have a look at how this new technique handles this situation. I want to see dry and floppy bunny ears."}, {"start": 90.11999999999999, "end": 92.84, "text": " Yes, now that's what I'm talking about."}, {"start": 92.84, "end": 96.44, "text": " Now then, that's great, but what else can this do?"}, {"start": 96.44, "end": 106.6, "text": " A lot more. For instance, we can engage in the favorite pastimes of the computer graphics researcher, which is, of course, destroying objects in a spectacular manner."}, {"start": 106.6, "end": 110.52, "text": " This is going to be a very challenging scene."}, {"start": 110.52, "end": 114.6, "text": " Ouch. And now, let physics take care of the rest."}, {"start": 114.6, "end": 118.28, "text": " This was a harrowing, but beautiful simulation."}, {"start": 118.28, "end": 121.8, "text": " And we can try to challenge the algorithm even more."}, {"start": 121.8, "end": 125.47999999999999, "text": " Here, we have three elastic spheres filled with water,"}, {"start": 125.47999999999999, "end": 132.92, "text": " and now, watch how they deform as they hit the ground and how the water gushes out exactly as it should."}, {"start": 132.92, "end": 137.95999999999998, "text": " And now, hold on to your papers, because there is a great deal to be seen in this animation,"}, {"start": 137.95999999999998, "end": 141.48, "text": " but the most important part remains invisible."}, {"start": 141.48, "end": 151.23999999999998, "text": " Get this. 
All three spheres use a different hyper-elasticity model to demonstrate that this new technique can be plugged into many existing techniques."}, {"start": 151.23999999999998, "end": 156.92, "text": " And it works so seamlessly that I don't think anyone would be able to tell the difference."}, {"start": 156.92, "end": 159.32, "text": " And it can do even more."}, {"start": 159.32, "end": 163.72, "text": " For instance, it can also simulate wet sand. Wow."}, {"start": 163.72, "end": 169.23999999999998, "text": " And I say wow, not only because of this beautiful result, but there is more behind it."}, {"start": 169.23999999999998, "end": 172.6, "text": " If you are one of our hardcore long-time fellow scholars,"}, {"start": 172.6, "end": 177.72, "text": " you may remember that three years ago, we needed an entire paper to pull this off."}, {"start": 177.72, "end": 185.88, "text": " This algorithm is more general and can simulate this kind of interaction between liquids and granular media as an additional side effect."}, {"start": 185.88, "end": 189.64, "text": " We can also simulate dropping this creature into a piece of fluid,"}, {"start": 189.64, "end": 194.51999999999998, "text": " and as we increase the density of the creature, it sinks in in a realistic manner."}, {"start": 198.6, "end": 202.51999999999998, "text": " While we are lifting frogs and help an elastic bear take a bath,"}, {"start": 202.51999999999998, "end": 205.48, "text": " let's look at why this technique works so well."}, {"start": 205.48, "end": 210.6, "text": " The key to achieving these amazing results in a reasonable amount of time is that this new method"}, {"start": 210.6, "end": 216.6, "text": " is able to find these interfaces where the fluids and solids meet and handles their interactions in a way"}, {"start": 216.6, "end": 221.95999999999998, "text": " that we can advance the time in our simulation in larger steps than previous methods."}, {"start": 221.95999999999998, "end": 226.6, "text": " This leads to not only these amazingly general and realistic simulations,"}, {"start": 226.6, "end": 229.24, "text": " but they also run faster."}, {"start": 229.24, "end": 235.32, "text": " Furthermore, I am very happy about the fact that now we can not only simulate these difficult phenomena,"}, {"start": 235.32, "end": 238.68, "text": " but we don't even have to implement a technique for each of them."}, {"start": 238.68, "end": 243.8, "text": " We can take this one and simulate a wide variety of fluid solid interactions."}, {"start": 243.8, "end": 245.72, "text": " What a time to be alive!"}, {"start": 245.72, "end": 248.84, "text": " This episode has been supported by weights and biases."}, {"start": 248.84, "end": 253.88, "text": " In this post, they show you how to use their system to train a deep reinforcement learner"}, {"start": 253.88, "end": 257.48, "text": " to become adapt at the card pole balancing problem."}, {"start": 257.48, "end": 263.48, "text": " This is the algorithm that was able to master a Tari breakout that we often talk about in this series."}, {"start": 263.48, "end": 268.12, "text": " Weight and biases provides tools to track your experiments in your deep learning projects."}, {"start": 268.12, "end": 271.8, "text": " Their system is designed to save you a ton of time and money,"}, {"start": 271.8, "end": 275.16, "text": " and it is actively used in projects at prestigious labs,"}, {"start": 275.16, "end": 279.64, "text": " such as OpenAI, Toyota Research, GitHub, and more."}, {"start": 279.64, 
"end": 284.6, "text": " And the best part is that if you have an open source, academic, or personal project,"}, {"start": 284.6, "end": 286.76, "text": " you can use their tools for free."}, {"start": 286.76, "end": 289.24, "text": " It really is as good as it gets."}, {"start": 289.24, "end": 293.16, "text": " Make sure to visit them through wnb.com slash papers,"}, {"start": 293.16, "end": 298.68, "text": " or click the link in the video description to start tracking your experiments in 5 minutes."}, {"start": 298.68, "end": 301.88000000000005, "text": " Our thanks to weights and biases for their long standing support"}, {"start": 301.88000000000005, "end": 304.68, "text": " and for helping us make better videos for you."}, {"start": 304.68, "end": 332.12, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=2qMw8sOsNg0
What is De-Aging? 🧑
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their report for this paper is available here: https://wandb.ai/wandb/in-domain-gan/reports/In-Domain-GAN-Inversion--VmlldzoyODE5Mzk 📝 The paper "In-Domain GAN Inversion for Real Image Editing" is available here: https://genforce.github.io/idinvert/ Check out the research group's other works, there is lots of cool stuff there: https://genforce.github.io/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #deaging
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are living in the advent of neural network-based image generation algorithms. What you see here is some super high-quality results from a technique developed by scientists at NVIDIA called StyleGAN2. Right? All of these were generated by a learning algorithm. And while generating images of this quality is a great achievement, if we have an artistic vision, we wonder, can we bend these images to our will? Can we control them? Well, kind of, and one of the methods that enables us to do that is called image interpolation. This means that we have a reference image for style and a target image, and with this we can morph one human face into another. This is sufficient for some use cases; however, if we are looking for more elaborate edits, we hit a wall. Now, it's good that we already know what StyleGAN is, because this new work builds on top of that and shows exceptional image editing and interpolation abilities. Let's start with the image editing part. With this new work, we can give anyone glasses, and a smile, or even better, transform them into a variant of the Mona Lisa. Beautiful! The authors of the paper call this process semantic diffusion. Now, let's have a closer look at the expression- and pose-change possibilities. I really like that we have fine-grained control over these parameters, and what's even better, we don't just have a start and end point, but all the intermediate images make sense and can stand on their own. This is great for pose and expression, because we can control how big of a smile we are looking for, or even better, we can adjust the age of the test subject with remarkable granularity. Let's go all out! I like how Mr. Cumberbatch looks nearly the same as a baby, we might have a new mathematical definition for babyface right there, and apparently Mr. DiCaprio scores a bit lower on that, and I would say that both results are quite credible. Very cool! And now, onto image interpolation. What does this new work bring to the table in this area? Previous techniques are also pretty good at morphing, until we take a closer look at them. Let's continue our journey with three interpolation examples with increasing difficulty. Let's see the easy one first. I was looking for a morphing example with long hair, you will see why right away. This is how the older method did. Uh oh, one more time. Do you see what I see? If I stop the process here, you see that this is an intermediate image that doesn't make sense. The hair over the forehead just suddenly vanishes into the ether. Now, let's see how the new method deals with this issue. Wow, much cleaner, and I can stop nearly anywhere and leave the process with a usable image. Easy example, checkmark. Now, let's see an intermediate level example. Let's go from an old black and white Einstein photo to a recent picture with colors and stop the process at different points, and... Yes, I prefer the picture created with the new technique close to every single time. Do you agree? Let me know in the comments below. Intermediate example, checkmark. And now onwards to the hardest, nastiest example. This is going to sound impossible, but we are going to transform the Eiffel Tower into the Tower Bridge. Yes, that sounds pretty much impossible. So let's see how the conventional interpolation technique did here. Well, that's not good. I would argue that nearly none of the images showcased here would be believable if we stopped the process and took them out.
And let's see the new method. Hmm, that makes sense. We start with one tower, then two towers grow from the ground, and look. Wow, the bridge slowly appears between them. That was incredible. While we look at some more results, what really happened here? At the risk of simplifying the contribution of this new paper, we can say that during interpolation, it ensures that we remain within the same domain for the intermediate images. Intuitively, as a result, we get less nonsense in the outputs and can pull off morphing not only between human faces, but even go from a black and white photo to a colored one. And what's more, it can even deal with completely different building types. Or, you know, just transform people into Mona Lisa variants. Absolutely amazing. What a time to be alive. What you see here is a report on this exact paper we have talked about, which was made by Weights & Biases. I put a link to it in the description. Make sure to have a look. I think it helps you understand this paper better. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you have an open source, academic or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or click the link in the video description to start tracking your experiments in five minutes. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
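As a rough illustration of the latent-code operations described above, here is a minimal Python sketch of interpolation and semantic editing. The `encoder`, `generator`, and attribute `direction` are hypothetical stand-ins for a GAN inversion encoder, a pretrained StyleGAN-style synthesis network, and a learned attribute direction; this is not the paper's actual API.

```python
import numpy as np

def interpolate(image_a, image_b, encoder, generator, steps=8):
    # Invert both real photos to latent codes, then blend the codes, not the pixels.
    z_a, z_b = encoder(image_a), encoder(image_b)
    frames = []
    for t in np.linspace(0.0, 1.0, steps):
        z = (1.0 - t) * z_a + t * z_b      # intermediate latent code
        frames.append(generator(z))        # decode each intermediate code to an image
    return frames

def edit(image, direction, strength, encoder, generator):
    # Semantic editing: shift the inverted code along an attribute direction
    # (smile, glasses, age, ...) and decode the shifted code.
    return generator(encoder(image) + strength * direction)
```

Keeping the intermediate codes inside the region of latent space the generator was trained on is what makes every in-between frame decode to a plausible image, which is the in-domain property the transcript highlights.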
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karajol Nehfaher."}, {"start": 4.4, "end": 9.4, "text": " Today, we are living the advent of neural network-based image generation algorithms."}, {"start": 9.4, "end": 17.7, "text": " What you see here is some super high-quality results from a technique developed by scientists at NVIDIA called Stalgant 2."}, {"start": 17.7, "end": 21.8, "text": " Right? All of these were generated by a learning algorithm."}, {"start": 21.8, "end": 25.900000000000002, "text": " And while generating images of this quality is a great achievement,"}, {"start": 25.9, "end": 31.4, "text": " but if we have an artistic vision, we wonder, can we bend these images to our will?"}, {"start": 31.4, "end": 33.1, "text": " Can we control them?"}, {"start": 33.1, "end": 39.8, "text": " Well, kind of, and one of the methods that enables us to do that is called image interpolation."}, {"start": 39.8, "end": 44.0, "text": " This means that we have a reference image for style and a target image,"}, {"start": 44.0, "end": 48.599999999999994, "text": " and with this we can morph one human face into another."}, {"start": 48.599999999999994, "end": 50.9, "text": " This is sufficient for some use cases,"}, {"start": 50.9, "end": 55.599999999999994, "text": " however, if we are looking for more elaborate edits, we hit a wall."}, {"start": 55.6, "end": 58.6, "text": " Now, it's good that we already know what Stalgant is,"}, {"start": 58.6, "end": 66.3, "text": " because this new work builds on top of that and shows exceptional image editing and interpolation abilities."}, {"start": 66.3, "end": 68.7, "text": " Let's start with the image editing part."}, {"start": 68.7, "end": 73.5, "text": " With this new work, we can give anyone glasses,"}, {"start": 73.5, "end": 75.8, "text": " and a smile,"}, {"start": 75.8, "end": 83.4, "text": " or even better, transform them into a variant of the Mona Lisa."}, {"start": 83.4, "end": 88.60000000000001, "text": " Beautiful! The authors of the paper call this process semantic diffusion."}, {"start": 88.60000000000001, "end": 92.9, "text": " Now, let's have a closer look at the expression and post-change possibilities."}, {"start": 92.9, "end": 96.80000000000001, "text": " I really like that we have fine-grained control over these parameters,"}, {"start": 96.80000000000001, "end": 100.80000000000001, "text": " and what's even better, we don't just have a start and end point,"}, {"start": 100.80000000000001, "end": 105.9, "text": " but all the intermediate images make sense and can stand on their own."}, {"start": 105.9, "end": 108.4, "text": " This is great for pose and expression,"}, {"start": 108.4, "end": 111.9, "text": " because we can control how big of a smile we are looking for,"}, {"start": 111.9, "end": 118.4, "text": " or even better, we can adjust the age of the test subject with remarkable granularity."}, {"start": 118.4, "end": 121.4, "text": " Let's go all out!"}, {"start": 121.4, "end": 125.60000000000001, "text": " I like how Mr. Comberbatch looks nearly the same as a baby,"}, {"start": 125.60000000000001, "end": 129.8, "text": " we might have a new mathematical definition for babyface right there,"}, {"start": 129.8, "end": 133.8, "text": " and apparently Mr. DiCaprio scores a bit lower on that,"}, {"start": 133.8, "end": 137.9, "text": " and I would say that both results are quite credible."}, {"start": 137.9, "end": 141.9, "text": " Very cool! 
And now, onto image interpolation."}, {"start": 141.9, "end": 145.4, "text": " What does this new work bring to the table in this area?"}, {"start": 145.4, "end": 148.4, "text": " Previous techniques are also pretty good at morphing,"}, {"start": 148.4, "end": 150.9, "text": " until we take a closer look at them."}, {"start": 150.9, "end": 156.20000000000002, "text": " Let's continue our journey with three interpolation examples with increasing difficulty."}, {"start": 156.20000000000002, "end": 158.4, "text": " Let's see the easy one first."}, {"start": 158.4, "end": 163.70000000000002, "text": " I was looking for a morphing example with long hair, you will see why right away."}, {"start": 163.70000000000002, "end": 166.70000000000002, "text": " This is how the older method did."}, {"start": 166.7, "end": 171.2, "text": " Uh oh, one more time."}, {"start": 171.2, "end": 173.2, "text": " Do you see what I see?"}, {"start": 173.2, "end": 179.2, "text": " If I stop the process here, you see that this is an intermediate image that doesn't make sense."}, {"start": 179.2, "end": 184.2, "text": " The hair over the forehead just suddenly vanishes into the ether."}, {"start": 184.2, "end": 189.7, "text": " Now, let's see how the new method deals with this issue."}, {"start": 189.7, "end": 197.2, "text": " Wow, much cleaner, and I can stop nearly anywhere and leave the process with a usable image."}, {"start": 197.2, "end": 199.2, "text": " Easy example, checkmark."}, {"start": 199.2, "end": 202.2, "text": " Now, let's see an intermediate level example."}, {"start": 202.2, "end": 207.2, "text": " Let's go from an old black and white Einstein photo to a recent picture with colors"}, {"start": 207.2, "end": 210.7, "text": " and stop the process at different points, and..."}, {"start": 210.7, "end": 216.7, "text": " Yes, I prefer the picture created with the new technique close to every single time."}, {"start": 216.7, "end": 220.2, "text": " Do you agree? 
Let me know in the comments below."}, {"start": 220.2, "end": 222.7, "text": " Intermediate example, checkmark."}, {"start": 222.7, "end": 226.5, "text": " And now onwards to the hardest, nastiest example."}, {"start": 226.5, "end": 233.7, "text": " This is going to sound impossible, but we are going to transform the Eiffel Tower into the Tower Bridge."}, {"start": 233.7, "end": 236.7, "text": " Yes, that sounds pretty much impossible."}, {"start": 236.7, "end": 240.7, "text": " So let's see how the conventional interpolation technique did here."}, {"start": 240.7, "end": 243.2, "text": " Well, that's not good."}, {"start": 243.2, "end": 250.2, "text": " I would argue that nearly none of the images showcased here would be believable if we stopped the process and took them out."}, {"start": 250.2, "end": 253.2, "text": " And let's see the new method."}, {"start": 253.2, "end": 255.2, "text": " Hmm, that makes sense."}, {"start": 255.2, "end": 264.2, "text": " We start with one tower, then two towers grow from the ground, and look."}, {"start": 264.2, "end": 267.7, "text": " Wow, the bridge slowly appears between them."}, {"start": 267.7, "end": 269.2, "text": " That was incredible."}, {"start": 269.2, "end": 272.7, "text": " While we look at some more results, what really happened here?"}, {"start": 272.7, "end": 282.7, "text": " At the risk of simplifying the contribution of this new paper, we can say that during interpolation, it ensures that we remain within the same domain for the intermediate images."}, {"start": 282.7, "end": 293.7, "text": " Intuitively, as a result, we get less nonsense in the outputs and can pull off morphing not only between human faces, but even go from a black and white photo to a colored one."}, {"start": 293.7, "end": 298.7, "text": " And what's more, it can even deal with completely different building types."}, {"start": 298.7, "end": 302.7, "text": " Or, you know, just transform people into Mona Lisa variants."}, {"start": 302.7, "end": 304.7, "text": " Absolutely amazing."}, {"start": 304.7, "end": 306.7, "text": " What a time to be alive."}, {"start": 306.7, "end": 312.7, "text": " What you see here is a report of this exact paper we have talked about which was made by Wades and Biasis."}, {"start": 312.7, "end": 314.7, "text": " I put a link to it in the description."}, {"start": 314.7, "end": 318.7, "text": " Make sure to have a look. I think it helps you understand this paper better."}, {"start": 318.7, "end": 323.7, "text": " Wades and Biasis provides tools to track your experiments in your deep learning projects."}, {"start": 323.7, "end": 335.7, "text": " Their system is designed to save you a ton of time and money and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more."}, {"start": 335.7, "end": 342.7, "text": " And the best part is that if you have an open source, academic or personal project, you can use their tools for free."}, {"start": 342.7, "end": 344.7, "text": " It really is as good as it gets."}, {"start": 344.7, "end": 354.7, "text": " Make sure to visit them through WNB.com slash papers or click the link in the video description to start tracking your experiments in five minutes."}, {"start": 354.7, "end": 360.7, "text": " Our thanks to Wades and Biasis for their long standing support and for helping us make better videos for you."}, {"start": 360.7, "end": 375.7, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=5ePD83StI6A
This Is What Simulating a 100 Million Particles Looks Like!
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned instrumentation is available here: https://app.wandb.ai/stacey/sfmlearner/reports/See-3D-from-Video%3A-Depth-Perception-for-Self-Driving-Cars--Vmlldzo2Nzg2Nw Our Instagram page with the slow-motion footage is available here: https://www.instagram.com/twominutepapers/ 📝 The paper "A Massively Parallel and Scalable Multi-GPU Material Point Method " is available here: https://sites.google.com/view/siggraph2020-multigpu 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. If we study the laws of fluid motion and implement them in a computer program, we can create and enjoy these beautiful fluid simulations. And not only that, but today, with the amazing progress in computer graphics research, we can even enrich our physics simulations with anisotropic damage and elasticity. So, what does that mean exactly? This means that we can simulate more extreme topological changes in these virtual objects. This leads to better material separation when the damage happens. So, it appears that today we can do a great deal, but these techniques are pretty complex, take quite a while to compute, and they typically run on your processor. That's a pity, because many powerful consumer computers also have a graphics card, and if we could restate some of these algorithms to be able to run them on those, they would run significantly faster. So, do we have any hope for that? Well, today's paper promises a new particle data structure that is better suited for number crunching on the graphics card and is, hence, much faster than its predecessors. As a result, this runs the material point method, the algorithm that is capable of simulating these wondrous things you are seeing here, not only on your graphics card, but the work on one problem can also be distributed between many, many graphics cards. This means that we get crushing concrete, falling soil, candy balls, sand armadillos, oh my, you name it, and all this much faster than before. Now, since these are some devilishly detailed simulations, please do not expect several-frames-per-second kind of performance, we are still in the seconds-per-frame region, but we are not that far away. For instance, hold onto your papers, because this candy ball example contains nearly 23 million particles, and despite that, it runs in about 4 seconds per frame on a system equipped with 4 graphics cards. 4 seconds per frame. My goodness, if somebody told this to me today without showing me this paper, I would have not believed a word of it. But there is more. You know what? Let's double the number of particles and pull up this dam break scene. What you see here is 48 million particles that run in 15 seconds per frame. Let's do even more. These sand armadillos contain a total of 55 million particles and take about 30 seconds per frame. And in return, look at that beautiful mixture of the two sand materials. And with half a minute per frame, that's a great deal. I'll take this any day of the week. And if we wish to simulate crushing this piece of concrete with a hydraulic press, that will take nearly 100 million particles. Just look at that footage. This is an obscene amount of detail, and the price to be paid for this is nearly 4 minutes of simulation time for every frame that you see on the screen here. 4 minutes, you say? Hmm. That's a little more than expected. Why is that? We had several seconds per frame for the others, not minutes per frame. Well, it is because the particle count matters a great deal, but that's not the only consideration for such a simulation. For instance, here you see, with delta t, something that we call the time step size. The smaller this number is, the tinier the time steps with which we can advance the simulation when computing every interaction, and hence the more steps there are to compute. In simpler words, generally, time step size is also an important factor in the computation time, and the smaller it is, the slower the simulation will be.
As you see, we have to simulate 5 times more steps to make sure that we don't miss any particle interactions, and hence this takes much longer. Now, this one appears to be perhaps the simplest simulation of the bunch, isn't it? No, no, no, quite the opposite. If you have been holding onto your paper so far, now squeeze that paper and watch carefully. There we go. There we go. Why two 'there we go's? These are just around 6,000 bombs, which is not a lot; however, wait a minute. Each bomb is a collection of particles, giving us a total of not merely 6,000 particles, but a whopping 134 million particle simulation, and hence we may think that it's nearly impossible to perform in a reasonable amount of time. The time steps don't need to be that small for this one, so we can do it in less than 1 minute per frame. This was nearly impossible when I started my PhD, and today, less than a minute for one frame. It truly feels like we are living in a science fiction world. What a time to be alive. I also couldn't resist creating a slow-motion version of some of these videos, so if this is something that you wish to see, make sure to visit our Instagram page in the video description for more. What you see here is an instrumentation for a previous paper that we covered in this series, which was made by Weights & Biases. I think organizing these experiments really showcases the usability of their system. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you have an open source, academic or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or click the link in the video description to start tracking your experiments in 5 minutes. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
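To make the time-step discussion above concrete, here is a back-of-the-envelope Python cost model. The frame rate, the per-particle cost constant, and the time-step values are illustrative assumptions, not numbers from the paper; the point is only that shrinking the time step by some factor multiplies the work per frame by roughly the same factor.

```python
# Rough cost model: total cost per frame = (number of substeps) x (cost of one step).
def seconds_per_frame(num_particles, dt, frame_dt=1.0 / 48.0,
                      cost_per_particle_step=2e-9):
    substeps = frame_dt / dt                 # smaller dt -> more substeps per frame
    return substeps * num_particles * cost_per_particle_step

# Same particle count, but a 5x smaller time step costs roughly 5x more per frame,
# which is the effect the transcript describes for the concrete-crushing scene.
print(seconds_per_frame(1e8, dt=1e-4))       # ~41.7 s/frame with the larger step
print(seconds_per_frame(1e8, dt=2e-5))       # ~208  s/frame with the 5x smaller step
```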
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Ejone Fahir."}, {"start": 4.8, "end": 10.38, "text": " If we study the laws of fluid motion and implement them in a computer program, we can create"}, {"start": 10.38, "end": 13.76, "text": " and enjoy these beautiful fluid simulations."}, {"start": 13.76, "end": 18.92, "text": " And not only that, but today, with the amazing progress in computer graphics research, we"}, {"start": 18.92, "end": 25.12, "text": " can even enrich our physics simulations with anisotropic damage and elasticity."}, {"start": 25.12, "end": 27.64, "text": " So, what does that mean exactly?"}, {"start": 27.64, "end": 32.8, "text": " This means that we can simulate more extreme topological changes in these virtual objects."}, {"start": 32.8, "end": 36.88, "text": " This leads to better material separation when the damage happens."}, {"start": 36.88, "end": 42.72, "text": " So, it appears that today we can do a great deal, but these techniques are pretty complex"}, {"start": 42.72, "end": 47.72, "text": " and take quite a while to compute and they typically run on your processor."}, {"start": 47.72, "end": 53.28, "text": " That's a pity, because many powerful consumer computers also have a graphics card, and if"}, {"start": 53.28, "end": 57.84, "text": " we could restate some of these algorithms to be able to run them on those, they would"}, {"start": 57.84, "end": 60.08, "text": " run significantly faster."}, {"start": 60.08, "end": 62.92, "text": " So, do we have any hope for that?"}, {"start": 62.92, "end": 68.0, "text": " Well, today's paper promises a new particle data structure that is better suited for"}, {"start": 68.0, "end": 74.56, "text": " number crunching on the graphics card and is, hence, much faster than its predecessors."}, {"start": 74.56, "end": 79.84, "text": " As a result, this runs the material point method, the algorithm that is capable of simulating"}, {"start": 79.84, "end": 84.84, "text": " these wondrous things you are seeing here, not only on your graphics card, but the work"}, {"start": 84.84, "end": 90.28, "text": " on one problem can also be distributed between many, many graphics cards."}, {"start": 90.28, "end": 96.48, "text": " This means that we get crushing concrete, falling soil, candy balls, sand armadillos,"}, {"start": 96.48, "end": 101.68, "text": " oh my, you name it, and all this much faster than before."}, {"start": 101.68, "end": 107.12, "text": " Now, since these are some devilishly detailed simulations, please do not expect several"}, {"start": 107.12, "end": 112.68, "text": " frames per second kind of performance, we are still in the second's per frame region,"}, {"start": 112.68, "end": 115.24000000000001, "text": " but we are not that far away."}, {"start": 115.24000000000001, "end": 120.76, "text": " For instance, hold onto your papers, because this candy ball example contains nearly 23"}, {"start": 120.76, "end": 126.88000000000001, "text": " million particles, and despite that, it runs in about 4 seconds per frame on a system"}, {"start": 126.88000000000001, "end": 129.76, "text": " equipped with 4 graphics cards."}, {"start": 129.76, "end": 131.52, "text": " 4 seconds per frame."}, {"start": 131.52, "end": 136.92000000000002, "text": " My goodness, if somebody told this to me today without showing me this paper, I would"}, {"start": 136.92, "end": 139.64, "text": " have not believed a word of it."}, {"start": 139.64, "end": 141.23999999999998, "text": " But there is 
more."}, {"start": 141.23999999999998, "end": 142.23999999999998, "text": " You know what?"}, {"start": 142.23999999999998, "end": 146.48, "text": " Let's double the number of particles and pull up this damn break scene."}, {"start": 146.48, "end": 152.35999999999999, "text": " What you see here is 48 million particles that runs in 15 seconds per frame."}, {"start": 152.35999999999999, "end": 154.72, "text": " Let's do even more."}, {"start": 154.72, "end": 160.88, "text": " These sand armadillos contain a total of 55 million particles and take about 30 seconds per"}, {"start": 160.88, "end": 161.88, "text": " frame."}, {"start": 161.88, "end": 166.88, "text": " And in return, look at that beautiful mixture of the two sand materials."}, {"start": 166.88, "end": 169.79999999999998, "text": " And with half a minute per frame, that's a great deal."}, {"start": 169.79999999999998, "end": 172.92, "text": " I'll take this any day of the week."}, {"start": 172.92, "end": 177.6, "text": " And if we wish to simulate crushing this piece of concrete with a hydraulic press, that"}, {"start": 177.6, "end": 181.4, "text": " will take nearly 100 million particles."}, {"start": 181.4, "end": 183.4, "text": " Just look at that footage."}, {"start": 183.4, "end": 188.56, "text": " This is an obscene amount of detail, and the price to be paid for this is nearly 4 minutes"}, {"start": 188.56, "end": 193.64, "text": " of simulation time for every frame that you see on the screen here."}, {"start": 193.64, "end": 195.32, "text": " 4 minutes you say."}, {"start": 195.32, "end": 196.32, "text": " 1."}, {"start": 196.32, "end": 198.23999999999998, "text": " That's a little more than expected."}, {"start": 198.23999999999998, "end": 199.35999999999999, "text": " Why is that?"}, {"start": 199.35999999999999, "end": 204.23999999999998, "text": " We had several seconds per frame for the others, not minutes per frame."}, {"start": 204.23999999999998, "end": 210.23999999999998, "text": " Well, it is because the particle count matters a great deal, but that's not the only consideration"}, {"start": 210.23999999999998, "end": 212.04, "text": " for such a simulation."}, {"start": 212.04, "end": 217.32, "text": " For instance, here you see with delta T something that we call time step size."}, {"start": 217.32, "end": 222.6, "text": " The smaller this number is, the tinier the time steps with which we can advance the simulation"}, {"start": 222.6, "end": 227.72, "text": " when computing every interaction and hence the more steps there are to compute."}, {"start": 227.72, "end": 233.07999999999998, "text": " In simpler words, generally, time step size is also an important factor in the computation"}, {"start": 233.07999999999998, "end": 237.51999999999998, "text": " time and the smaller this is, the slower the simulation will be."}, {"start": 237.51999999999998, "end": 242.32, "text": " As you see, we have to simulate 5 times more steps to make sure that we don't miss any"}, {"start": 242.32, "end": 246.6, "text": " particle interactions and hence this takes much longer."}, {"start": 246.6, "end": 252.56, "text": " Now, this one appears to be perhaps the simplest simulation of the bunch, isn't it?"}, {"start": 252.56, "end": 255.68, "text": " No, no, no, quite the opposite."}, {"start": 255.68, "end": 262.12, "text": " If you have been holding onto your paper so far, now squeeze that paper and watch carefully."}, {"start": 262.12, "end": 263.52, "text": " There we go."}, {"start": 263.52, "end": 265.6, "text": " There we 
go."}, {"start": 265.6, "end": 267.2, "text": " Why do there we go?"}, {"start": 267.2, "end": 273.36, "text": " These are just around 6000 bombs, which is not a lot, however, wait a minute."}, {"start": 273.36, "end": 279.92, "text": " Each bomb is a collection of particles giving us a total of not immediately 6000, but a"}, {"start": 279.92, "end": 287.36, "text": " whopping 134 million particle simulation and hence we may think that it's nearly impossible"}, {"start": 287.36, "end": 290.24, "text": " to perform in a reasonable amount of time."}, {"start": 290.24, "end": 295.24, "text": " The time steps are not that far apart for this one so we can do it in less than 1 minute"}, {"start": 295.24, "end": 296.44, "text": " per frame."}, {"start": 296.44, "end": 303.40000000000003, "text": " This was nearly impossible when I started my PhD and today less than a minute for one frame."}, {"start": 303.40000000000003, "end": 307.20000000000005, "text": " It truly feels like we are living in a science fiction world."}, {"start": 307.20000000000005, "end": 308.84000000000003, "text": " What a time to be alive."}, {"start": 308.84, "end": 313.71999999999997, "text": " I also couldn't resist creating a slow-motion version of some of these videos so if this"}, {"start": 313.71999999999997, "end": 318.28, "text": " is something that you wish to see, make sure to visit our Instagram page in the video description"}, {"start": 318.28, "end": 319.28, "text": " for more."}, {"start": 319.28, "end": 323.84, "text": " What you see here is an instrumentation for a previous paper that we covered in this"}, {"start": 323.84, "end": 327.12, "text": " series which was made by weights and biases."}, {"start": 327.12, "end": 332.55999999999995, "text": " I think organizing these experiments really showcases the usability of their system."}, {"start": 332.55999999999995, "end": 337.2, "text": " Wates and biases provides tools to track your experiments in your deep learning projects."}, {"start": 337.2, "end": 342.0, "text": " Their system is designed to save you a ton of time and money and it is actively used"}, {"start": 342.0, "end": 348.76, "text": " in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more."}, {"start": 348.76, "end": 353.68, "text": " And the best part is that if you have an open source, academic or personal project,"}, {"start": 353.68, "end": 355.8, "text": " you can use their tools for free."}, {"start": 355.8, "end": 358.36, "text": " It really is as good as it gets."}, {"start": 358.36, "end": 364.44, "text": " Make sure to visit them through wnb.com slash papers or click the link in the video description"}, {"start": 364.44, "end": 367.76, "text": " to start tracking your experiments in 5 minutes."}, {"start": 367.76, "end": 372.36, "text": " Our thanks to weights and biases for their long-standing support and for helping us make"}, {"start": 372.36, "end": 373.68, "text": " better videos for you."}, {"start": 373.68, "end": 403.24, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=86QU7_SF16Q
Remove This! ✂️ AI-Based Video Completion is Amazing!
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Flow-edge Guided Video Completion" is available here: http://chengao.vision/FGVC/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Have you ever had a moment where you took the perfect photo, but upon closer inspection, there was this one annoying thing that ruined the whole picture? Well, why not just take a learning algorithm to erase those cracks in the facade of a building or a photobombing ship? Or, to even reimagine ourselves with different hair colors, we can try one of the many research works that are capable of something that we call image inpainting. What you see here is the legendary PatchMatch algorithm at work, which, believe it or not, is a handcrafted technique from more than ten years ago. Later, scientists at NVIDIA published a more modern inpainter that uses a learning-based algorithm to do this more reliably and for a greater variety of images. These all work really well, but the common denominator for these techniques is that they all work on inpainting still images. Would this be a possibility for video? Like removing a moving object or a person from a video. Is this possible, or is it science fiction? Let's see if these learning-based techniques can really do more. And now, hold on to your papers, because this new work can really perform proper inpainting for video. Let's give it a try by highlighting this human. And pro tip: also highlight the shadowy region for inpainting to make sure that not only the human, but its silhouette also disappears from the footage. And look, wow! Let's look at some other examples. Now that's really something, because video is much more difficult due to the requirement of temporal coherence, which means that it's not nearly enough if the images are inpainted really well individually, they also have to look good if we weave them together into a video. You will hear and see more about this in a moment. Not only that, but if we highlight a person, this person not only needs to be inpainted, but we also have to track the boundaries of this person throughout the footage and then inpaint a moving region. We get some help with that, which I will also talk about in a moment. Now, as you see here, these all work extremely well, and believe it or not, you have seen nothing yet, because so far, another common denominator in these examples was that we highlighted regions inside the video. But that's not all. If you have been holding onto your paper so far, now squeeze that paper, because we can also go outside and expand our video, spatially, with even more content. This one is very short, so I will keep looping it. Are you ready? Let's go. Wow! My goodness! The information from inside of the video frames is reused to infer what should be around the video frame, and all this in a temporally coherent manner. Now, of course, this is not the first technique to perform this, so let's see how it compares to the competition by erasing this bear from the video footage. The remnants of the bear are visible with a wide selection of previously published techniques from the last few years. This is true even for these four methods from last year. And let's see how this new method did on the same case. Yup! Very good. Not perfect, we still see some flickering. This is the temporal coherence example, or the lack thereof, that I have promised earlier. But now, let's look at this example with the BMX rider. We see similar performance with the previous techniques. And now, let's have a look at the new one. Now, that's what I'm talking about.
Not a trace left from this person; the only clue that we get in reconstructing what went down here is the camera movement. It truly feels like we are living in a science fiction world. What a time to be alive! Now, these were the qualitative results, and now let's have a look at the quantitative results. In other words, we saw the videos, now let's see what the numbers say. We could talk all day about the peak signal-to-noise ratios, or structural similarity, or other ways to measure how good these techniques are, but you will see in a moment that it is completely unnecessary. Why is that? Well, you see here that the second best results are underscored and highlighted with blue. As you see, there is plenty of competition, as the blues are all over the place. But there is no competition at all for the first place, because this new method smokes the competition in every category. This was measured on a dataset by the name Densely Annotated VIdeo Segmentation, or DAVIS in short. This contains 150 video sequences, and it is annotated, which means that many of the objects are highlighted throughout these videos, so for the cases in this dataset, we don't have to deal with the tracking ourselves. I am truly out of ideas as to what I should wish for two more papers down the line. Maybe not only removing the tennis player, but putting myself in there as a proxy. We can already grab a controller and play as if we were real characters in real broadcast footage, so who really knows? Anything is possible. Let me know in the comments what you have in mind for potential applications, and what you would be excited to see two more papers down the line. This episode has been supported by Lambda GPU Cloud. If you are looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000s. And researchers at organizations like Apple, MIT, and Caltech are using Lambda Cloud instances, workstations, or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
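For reference, here is a minimal Python sketch of PSNR, one of the quantitative metrics mentioned in the transcript. The formula itself is standard; only the 8-bit default range is an assumption, and SSIM works similarly in spirit but compares local structure instead of raw pixel error.

```python
import numpy as np

def psnr(reference, reconstruction, max_val=255.0):
    # Mean squared error between the ground-truth frame and the completed frame.
    ref = reference.astype(np.float64)
    rec = reconstruction.astype(np.float64)
    mse = np.mean((ref - rec) ** 2)
    if mse == 0.0:
        return float("inf")                  # identical images
    # Higher PSNR means the completed frame is closer to the ground truth.
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For video completion benchmarks, such per-frame scores are typically averaged over all completed frames of all test sequences, which is what makes a single-number comparison between methods possible.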
[{"start": 0.0, "end": 4.84, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Carlos John Aifahir."}, {"start": 4.84, "end": 10.040000000000001, "text": " Have you ever had a moment where you took the perfect photo, but upon closer inspection,"}, {"start": 10.040000000000001, "end": 13.96, "text": " there was this one annoying thing that ruined the whole picture?"}, {"start": 13.96, "end": 19.6, "text": " Well, why not just take a learning algorithm to erase those cracks in the facade of a building"}, {"start": 19.6, "end": 21.92, "text": " or a photo bombing ship?"}, {"start": 21.92, "end": 27.36, "text": " Or to even reimagine ourselves with different high-colors, we can try one of the many research"}, {"start": 27.36, "end": 32.12, "text": " works that are capable of something that we call image-impainting."}, {"start": 32.12, "end": 36.96, "text": " What you see here is the legendary patch-match algorithm at work, which, believe it or"}, {"start": 36.96, "end": 41.6, "text": " not, is a handcrafted technique from more than ten years ago."}, {"start": 41.6, "end": 47.08, "text": " Later, scientists at NVIDIA published a more modern impainter that uses a learning-based"}, {"start": 47.08, "end": 52.84, "text": " algorithm to do this more reliably and for a greater variety of images."}, {"start": 52.84, "end": 57.32000000000001, "text": " This all work really well, but the common denominator for these techniques is that they"}, {"start": 57.32000000000001, "end": 60.56, "text": " all work on impainting still images."}, {"start": 60.56, "end": 63.28, "text": " Would this be a possibility for video?"}, {"start": 63.28, "end": 67.44, "text": " Like removing a moving object or a person from a video."}, {"start": 67.44, "end": 70.92, "text": " Is this possible or is it science fiction?"}, {"start": 70.92, "end": 74.48, "text": " Let's see if these learning-based techniques can really do more."}, {"start": 74.48, "end": 79.88, "text": " And now, hold on to your papers because this new work can really perform proper impainting"}, {"start": 79.88, "end": 81.4, "text": " for video."}, {"start": 81.4, "end": 84.68, "text": " Let's give it a try by highlighting this human."}, {"start": 84.68, "end": 90.12, "text": " And pro-tip also highlight the shadowy region for impainting to make sure that not only"}, {"start": 90.12, "end": 94.96000000000001, "text": " the human, but its silhouette also disappears from the footage."}, {"start": 94.96000000000001, "end": 98.16000000000001, "text": " And look, wow!"}, {"start": 98.16000000000001, "end": 100.76, "text": " Let's look at some other examples."}, {"start": 100.76, "end": 105.60000000000001, "text": " Now that's really something because video is much more difficult due to the requirement"}, {"start": 105.60000000000001, "end": 110.72, "text": " of temporal coherence, which means that it's not nearly enough if the images are impainted"}, {"start": 110.72, "end": 115.96, "text": " really well individually, they also have to look good if we weave them together into"}, {"start": 115.96, "end": 116.96, "text": " a video."}, {"start": 116.96, "end": 120.4, "text": " You will hear and see more about this in a moment."}, {"start": 120.4, "end": 125.88, "text": " Not only that, but if we highlight a person, this person not only needs to be impainted,"}, {"start": 125.88, "end": 130.48, "text": " but we also have to track the boundaries of this person throughout the footage and then"}, {"start": 130.48, "end": 132.6, "text": " impaint a moving 
region."}, {"start": 132.6, "end": 136.4, "text": " We get some help with that, which I will also talk about in a moment."}, {"start": 136.4, "end": 142.36, "text": " Now, as you see here, these all work extremely well, and believe it or not, you have seen"}, {"start": 142.36, "end": 148.24, "text": " nothing yet because so far, another common denominator in these examples was that we highlighted"}, {"start": 148.24, "end": 150.92000000000002, "text": " regions inside the video."}, {"start": 150.92000000000002, "end": 152.0, "text": " But that's not all."}, {"start": 152.0, "end": 156.6, "text": " If you have been holding onto your paper so far, now squeeze that paper because we can"}, {"start": 156.6, "end": 162.96, "text": " also go outside and expand our video, spatially, with even more content."}, {"start": 162.96, "end": 166.0, "text": " This one is very short, so I will keep looping it."}, {"start": 166.0, "end": 167.0, "text": " Are you ready?"}, {"start": 167.0, "end": 168.0, "text": " Let's go."}, {"start": 168.0, "end": 169.0, "text": " Wow!"}, {"start": 169.0, "end": 170.0, "text": " My goodness!"}, {"start": 170.0, "end": 175.4, "text": " The information from inside of the video frames is reused to infer what should be around"}, {"start": 175.4, "end": 179.68, "text": " the video frame and all this in a temporalic coherent manner."}, {"start": 179.68, "end": 185.32, "text": " Now, of course, this is not the first technique to perform this, so let's see how it compares"}, {"start": 185.32, "end": 189.72, "text": " to the competition by erasing this bear from the video footage."}, {"start": 189.72, "end": 194.6, "text": " The remnants of the bear are visible with a wide selection of previously published techniques"}, {"start": 194.6, "end": 196.79999999999998, "text": " from the last few years."}, {"start": 196.79999999999998, "end": 204.12, "text": " This is true even for these four methods from last year."}, {"start": 204.12, "end": 208.92, "text": " And let's see how this new method did on the same case."}, {"start": 208.92, "end": 209.92, "text": " Yup!"}, {"start": 209.92, "end": 211.92, "text": " Very good."}, {"start": 211.92, "end": 214.76, "text": " Not perfect, we still see some flickering."}, {"start": 214.76, "end": 219.92, "text": " This is the temporal coherence example or the leg thereof that I have promised earlier."}, {"start": 219.92, "end": 223.6, "text": " But now, let's look at this example with the BMX rider."}, {"start": 223.6, "end": 229.32, "text": " We see similar performance with the previous techniques."}, {"start": 229.32, "end": 232.12, "text": " And now, let's have a look at the new one."}, {"start": 232.12, "end": 234.44, "text": " Now, that's what I'm talking about."}, {"start": 234.44, "end": 239.64, "text": " Not a trace left from this person, the only clue that we get in reconstructing what went"}, {"start": 239.64, "end": 242.16, "text": " down here is the camera movement."}, {"start": 242.16, "end": 245.92, "text": " It truly feels like we are living in a science fiction world."}, {"start": 245.92, "end": 247.95999999999998, "text": " What a time to be alive!"}, {"start": 247.95999999999998, "end": 253.44, "text": " Now these were the qualitative results, and now let's have a look at the quantitative results."}, {"start": 253.44, "end": 258.2, "text": " In other words, we saw the videos, now let's see what the numbers say."}, {"start": 258.2, "end": 264.04, "text": " We could talk all day about the peak signal to noise ratios, or structural 
similarity,"}, {"start": 264.04, "end": 268.64, "text": " or other ways to measure how good these techniques are, but you will see in a moment that it"}, {"start": 268.64, "end": 271.2, "text": " is completely unnecessary."}, {"start": 271.2, "end": 272.2, "text": " Why is that?"}, {"start": 272.2, "end": 278.24, "text": " Well, you see here that the second best results are underscored and highlighted with blue."}, {"start": 278.24, "end": 282.96, "text": " As you see, there is plenty of competition as the blues are all over the place."}, {"start": 282.96, "end": 287.56, "text": " But there is no competition at all for the first place, because this new method smokes"}, {"start": 287.56, "end": 290.56, "text": " the competition in every category."}, {"start": 290.56, "end": 295.91999999999996, "text": " This was measured on a data set by the name Densely Annotated Video segmentation Davis, in"}, {"start": 295.91999999999996, "end": 302.52, "text": " short, this contains 150 video sequences, and it is annotated, which means that many"}, {"start": 302.52, "end": 307.64, "text": " of the objects are highlighted throughout this video, so for the cases in this data set,"}, {"start": 307.64, "end": 310.47999999999996, "text": " we don't have to deal with the tracking ourselves."}, {"start": 310.48, "end": 315.72, "text": " I am truly out of ideas as to what I should wish for two more papers down the line."}, {"start": 315.72, "end": 321.12, "text": " Maybe not only removing the tennis player, but putting myself in there as a proxy."}, {"start": 321.12, "end": 326.52000000000004, "text": " We can already grab a controller and play as if we were real characters in real broadcast"}, {"start": 326.52000000000004, "end": 329.44, "text": " footage, so who really knows?"}, {"start": 329.44, "end": 330.76, "text": " Anything is possible."}, {"start": 330.76, "end": 334.88, "text": " Let me know in the comments what you have in mind for potential applications, and what"}, {"start": 334.88, "end": 338.64000000000004, "text": " you would be excited to see two more papers down the line."}, {"start": 338.64, "end": 342.08, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 342.08, "end": 348.03999999999996, "text": " If you are looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 348.03999999999996, "end": 355.96, "text": " They recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your"}, {"start": 355.96, "end": 362.47999999999996, "text": " papers because Lambda GPU Cloud can cost less than half of AWS and Azure."}, {"start": 362.47999999999996, "end": 367.91999999999996, "text": " Plus they are the only Cloud service with 48GB RTX 8000."}, {"start": 367.92, "end": 374.32, "text": " And researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances,"}, {"start": 374.32, "end": 376.12, "text": " workstations, or servers."}, {"start": 376.12, "end": 382.12, "text": " Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances"}, {"start": 382.12, "end": 383.12, "text": " today."}, {"start": 383.12, "end": 387.6, "text": " Our thanks to Lambda for their long-standing support and for helping us make better videos"}, {"start": 387.6, "end": 388.6, "text": " for you."}, {"start": 388.6, "end": 415.72, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=OzHenjHBBds
Enhance! Neural Supersampling is Here! 🔎
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://www.wandb.com/articles/code-comparer 📝 The paper "Neural Supersampling for Real-time Rendering" is available here: https://research.fb.com/blog/2020/07/introducing-neural-supersampling-for-real-time-rendering/ https://research.fb.com/publications/neural-supersampling-for-real-time-rendering/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is 2 Minute Papers with Dr. Károly Zsolnai-Fehér. Let's talk about video super resolution. The problem statement is simple: in goes a coarse video, the technique analyzes it, guesses what's missing, and out comes a detailed video. You know, the CSI thing: enhance! However, of course, reliably solving this problem is anything but that simple. This previous method is called TecoGAN and it is able to give us results that are often very close to reality. It is truly amazing how much this technique understands the world around us from just this training set of low and high resolution videos. However, as amazing as super resolution is, it is not the most reliable way of delivering high quality images in many real-time applications, for instance, video games. Note that in these applications, we typically have more data at our disposal, but in return, the requirements are also higher. We need high quality images at least 60 times per second, and temporal coherence is a necessity, or in other words, no jarring jumps and flickering is permitted. And hence, one of the techniques often used in these cases is called super sampling. At the risk of simplifying the term, super sampling means that we split every pixel into multiple pixels to compute a more detailed image and then display that to the user. And does this work? Yes, it does, it works wonderfully, but it requires a lot more memory and computation, therefore, it is generally quite expensive. So, our question today is, is it possible to use these amazing learning-based algorithms to do it a little smarter? Let's have a look at some results from a recent paper that uses a neural network to perform super sampling at a more reasonable computational cost. Now, in goes the low resolution input and my goodness, like magic, out comes this wonderful, much more detailed result. And here is the reference, which is the true higher resolution image. Of course, the closer the neural super sampling is to this, the better. And as you see, this is indeed really close and much better than the pixelated inputs. Let's do one more. Wow! This is so close, I feel we are just a couple papers away from a result being indistinguishable from the real reference image. Now, we noted that this new method has access to more information than the previously showcased super resolution method. It looks at not just one frame, but a few previous frames as well. It can use an estimation of the motion of each pixel over time and also gets depth information. This can typically be produced inexpensively with any major game engine. So how much of this data is required to train this neural network? Hold on to your papers, because 80 videos were used and the training took approximately one and a half days on one Titan V, which is an expensive but commercially available graphics card. And no matter, because this step only has to be done once, and depending on how high the resolution of the output should be, with the faster version of the technique the up sampling step takes from 8 to 18 milliseconds, so this runs easily in real time. Now of course this is not the only modern super sampling method, this topic is subject to a great deal of research, so let's see how it compares to others. Here you see the results with TAAU, the temporal up sampling technique used in Unreal Engine, an industry standard game engine. 
And look, this neural super sampler is significantly better at anti-aliasing, or in other words, smoothing these jagged edges, and not only that, but it also resolves many more of the intricate details of the image. Temporal coherence has also improved a great deal, as you see the video output is much smoother for the new method. This paper also contains a ton more comparisons against recent methods, so make sure to have a look. Like many of us, I would love to see a comparison against Nvidia's DLSS solution, but I haven't been able to find a published paper on the later versions of this method. I remain very excited about seeing that too. And for now, the future of video game visuals and other real time graphics applications is looking as exciting as it's ever been. What a time to be alive. This episode has been supported by weights and biases. In this post, they show you how to use their system to explore what exact code changes were made between two machine learning experiments. Weights and biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. The best part is that if you have an open source, academic or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or click the link in the video description to start tracking your experiments in five minutes. Our thanks to weights and biases for their long standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
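To make the idea of learned super sampling a bit more concrete, here is a minimal sketch of a network that takes a low resolution color frame together with the depth and per-pixel motion buffers a game engine can provide, and predicts a 4x higher resolution frame. The module name, layer sizes, and scale factor are assumptions for illustration and do not reproduce the architecture from the paper.

import torch
import torch.nn as nn

class ToyNeuralSupersampler(nn.Module):
    """Illustrative 4x upsampler: low-res color (3) + depth (1) + motion (2) channels -> high-res color."""
    def __init__(self, scale: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            # Predict scale*scale*3 values per low-res pixel, then rearrange them into a larger image.
            nn.Conv2d(32, 3 * scale * scale, kernel_size=3, padding=1),
        )
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, color, depth, motion):
        x = torch.cat([color, depth, motion], dim=1)   # (N, 6, H, W)
        return self.shuffle(self.features(x))          # (N, 3, H*scale, W*scale)

# Toy usage with a single 270x480 low resolution frame and its auxiliary buffers.
model = ToyNeuralSupersampler()
color  = torch.rand(1, 3, 270, 480)
depth  = torch.rand(1, 1, 270, 480)
motion = torch.rand(1, 2, 270, 480)
print(model(color, depth, motion).shape)  # torch.Size([1, 3, 1080, 1920])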
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Dr. Karojejona Ifehir."}, {"start": 5.0, "end": 7.72, "text": " Let's talk about video super resolution."}, {"start": 7.72, "end": 13.16, "text": " The problem statement is simple, in goes a course video, the technique analyzes it, guesses"}, {"start": 13.16, "end": 17.32, "text": " what's missing, and out comes a detailed video."}, {"start": 17.32, "end": 23.16, "text": " You know the CSI thing, and hence, however, of course, reliably solving this problem is"}, {"start": 23.16, "end": 24.72, "text": " anything that simple."}, {"start": 24.72, "end": 29.72, "text": " This previous method is called TACOGAN and it is able to give us results that are often"}, {"start": 29.72, "end": 31.56, "text": " very close to reality."}, {"start": 31.56, "end": 36.0, "text": " It is truly amazing how much this technique understands the world around us from just"}, {"start": 36.0, "end": 39.6, "text": " this training set of low and high resolution videos."}, {"start": 39.6, "end": 44.96, "text": " However, as amazing as super resolution is, it is not the most reliable way of delivering"}, {"start": 44.96, "end": 50.44, "text": " high quality images in many real-time applications, for instance, video games."}, {"start": 50.44, "end": 56.599999999999994, "text": " Note that in these applications, we typically have more data at our disposal, but in return,"}, {"start": 56.599999999999994, "end": 59.2, "text": " the requirements are also higher."}, {"start": 59.2, "end": 65.32000000000001, "text": " We need high quality images, at least 60 times per second, temporal coherence is a necessity,"}, {"start": 65.32000000000001, "end": 70.12, "text": " or in other words, no jarring jumps and flickering is permitted."}, {"start": 70.12, "end": 75.52000000000001, "text": " And hence, one of the techniques often used in these cases is called super sampling."}, {"start": 75.52000000000001, "end": 80.64, "text": " At the risk of simplifying the term, super sampling means that we split every pixel into"}, {"start": 80.64, "end": 86.60000000000001, "text": " multiple pixels to compute a more detailed image and then display that to the user."}, {"start": 86.60000000000001, "end": 87.84, "text": " And does this work?"}, {"start": 87.84, "end": 94.44, "text": " Yes, it does, it works wonderfully, but it requires a lot more memory and computation,"}, {"start": 94.44, "end": 97.56, "text": " therefore, it is generally quite expensive."}, {"start": 97.56, "end": 103.24000000000001, "text": " So, our question today is, is it possible to use these amazing learning-based algorithms"}, {"start": 103.24000000000001, "end": 105.12, "text": " to do it a little smarter?"}, {"start": 105.12, "end": 109.64, "text": " Let's have a look at some results from a recent paper that uses a neural network to perform"}, {"start": 109.64, "end": 113.32000000000001, "text": " super sampling at a more reasonable computational cost."}, {"start": 113.32, "end": 121.03999999999999, "text": " Now, in goes the low resolution input and my goodness, like magic, outcomes this wonderful,"}, {"start": 121.03999999999999, "end": 122.96, "text": " much more detailed result."}, {"start": 122.96, "end": 127.11999999999999, "text": " And here is the reference, which is the true higher resolution image."}, {"start": 127.11999999999999, "end": 131.68, "text": " Of course, the closer the neural super sampling is to this, the better."}, {"start": 131.68, "end": 138.79999999999998, 
"text": " And as you see, this is indeed really close and much better than the pixelated inputs."}, {"start": 138.79999999999998, "end": 140.4, "text": " Let's do one more."}, {"start": 140.4, "end": 141.92, "text": " Wow!"}, {"start": 141.92, "end": 148.28, "text": " This is so close, I feel we are just a couple papers away from a result being indistinguishable"}, {"start": 148.28, "end": 150.16, "text": " from the real reference image."}, {"start": 150.16, "end": 155.07999999999998, "text": " Now, we noted that this previous method has access to more information than the previously"}, {"start": 155.07999999999998, "end": 157.67999999999998, "text": " showcased super resolution method."}, {"start": 157.67999999999998, "end": 162.72, "text": " It looks at not just one frame, but a few previous frames as well."}, {"start": 162.72, "end": 169.83999999999997, "text": " Can use an estimation of the motion of each pixel over time and also gets depth information."}, {"start": 169.84, "end": 174.56, "text": " This can be typically produced inexpensively with any major game engine."}, {"start": 174.56, "end": 179.12, "text": " So how much of this data is required to train this neural network?"}, {"start": 179.12, "end": 184.08, "text": " Hold on to your papers because 80 videos were used and the training took approximately"}, {"start": 184.08, "end": 190.3, "text": " one and a half days on one Titan V, which is an expensive but commercially available graphics"}, {"start": 190.3, "end": 191.3, "text": " card."}, {"start": 191.3, "end": 196.48000000000002, "text": " And no matter because this step only has to be done once and depending on how high the"}, {"start": 196.48, "end": 201.07999999999998, "text": " resolution of the output should be with the faster version of the technique, the up sampling"}, {"start": 201.07999999999998, "end": 207.88, "text": " step takes from 8 to 18 milliseconds, so this runs easily in real time."}, {"start": 207.88, "end": 212.51999999999998, "text": " Now of course this is not the only modern super sampling method, this topic is subject"}, {"start": 212.51999999999998, "end": 218.04, "text": " to a great deal of research, so let's see how it compares to others."}, {"start": 218.04, "end": 223.76, "text": " Here you see the results with TAAU, the temporal up sampling technique used in Unreal Engine"}, {"start": 223.76, "end": 226.39999999999998, "text": " and industry standard game engine."}, {"start": 226.39999999999998, "end": 233.51999999999998, "text": " And look, this neural super sampler is significantly better at anti-aliasing or in other words,"}, {"start": 233.51999999999998, "end": 239.95999999999998, "text": " smoothing these jagged edges and not only that, but it also resolves many more of the intricate"}, {"start": 239.95999999999998, "end": 243.56, "text": " details of the image."}, {"start": 243.56, "end": 248.56, "text": " Temporal coherence has also improved a great deal as you see the video output is much"}, {"start": 248.56, "end": 251.04, "text": " smoother for the new method."}, {"start": 251.04, "end": 256.4, "text": " This paper also contains a pond more comparisons against recent methods, so make sure to have"}, {"start": 256.4, "end": 257.56, "text": " a look."}, {"start": 257.56, "end": 263.08, "text": " Like many of us, I would love to see a comparison against Nvidia's DLSS solution, but I haven't"}, {"start": 263.08, "end": 267.12, "text": " been able to find a published paper on the later versions of this method."}, {"start": 267.12, 
"end": 270.44, "text": " I remain very excited about seeing that too."}, {"start": 270.44, "end": 275.4, "text": " And for now, the future of video game visuals and other real time graphics applications"}, {"start": 275.4, "end": 278.88, "text": " is looking as excited as it's ever been."}, {"start": 278.88, "end": 280.56, "text": " What a time to be alive."}, {"start": 280.56, "end": 283.64, "text": " This episode has been supported by weights and biases."}, {"start": 283.64, "end": 288.68, "text": " In this post, they show you how to use their system to explore what exact code changes"}, {"start": 288.68, "end": 291.88, "text": " were made between two machine learning experiments."}, {"start": 291.88, "end": 296.52, "text": " Weight and biases provides tools to track your experiments in your deep learning projects."}, {"start": 296.52, "end": 301.32, "text": " Their system is designed to save you a ton of time and money and it is actively used"}, {"start": 301.32, "end": 308.04, "text": " in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more."}, {"start": 308.04, "end": 313.12, "text": " The best part is that if you have an open source, academic or personal project, you can"}, {"start": 313.12, "end": 315.08000000000004, "text": " use their tools for free."}, {"start": 315.08000000000004, "end": 317.6, "text": " It really is as good as it gets."}, {"start": 317.6, "end": 323.72, "text": " Make sure to visit them through wnb.com slash papers or click the link in the video description"}, {"start": 323.72, "end": 327.04, "text": " to start tracking your experiments in five minutes."}, {"start": 327.04, "end": 331.6, "text": " Our thanks to weights and biases for their long standing support and for helping us make"}, {"start": 331.6, "end": 332.84000000000003, "text": " better videos for you."}, {"start": 332.84, "end": 338.84, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=XrOTgZ14fJg
This AI Can Deal With Body Shape Variation!
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned instrumentation is available here: https://app.wandb.ai/lavanyashukla/cnndetection/reports/Detecting-CNN-Generated-Images--Vmlldzo2MTU1Mw 📝 The paper "Learning Body Shape Variation in Physics-based Characters" is available here: http://mrl.snu.ac.kr/publications/ProjectMorphCon/MorphCon.html 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. This glorious paper from about 7 years ago was about teaching digital creatures to walk. The numbers here showcase the process of learning over time, and it is clear that the later generations did much better than the earlier ones. These control algorithms are not only able to teach these creatures to walk, but they are quite robust against perturbations as well, or more simply put, we can engage in one of the favorite pastimes of a computer graphics researcher, which is, of course, throwing boxes at a character and seeing how well it can take it. This one has done really well. Well, kind of. Now, we just noted that this is a computer graphics paper and an amazing one at that, but it does not yet use these incredible new machine learning techniques that just keep getting better year by year. So, these agents could learn to inhabit a given body, but I am wondering what would happen if we would suddenly change their bodies on the fly. Could previous methods handle it? Unfortunately, not really, and I think it is fair to say that an intelligent agent should have the ability to adapt when something changes. Therefore, our next question is, how far have we come in these seven years? Can these new machine learning methods help us create a more general agent that could control not just one body, but a variety of different bodies? Let's have a look at today's paper and with that, let the fun begin. This initial agent was blessed with reasonable body proportions, but of course, we can't just leave it like that. Yes, much better. And look, all of these combinations can still work properly, and all of them use the same one reinforcement learning algorithm. And do not think for a second that this is where the fun ends. No, no, no. Now, hold on to your papers and let's engage in these horrific asymmetric changes. There is no way that the same algorithm could be given this body and still be able to walk. Goodness, look at that. It is indeed still able to walk. If you have been holding on to your papers, good. Now, squeeze that paper, because after adjusting the height, it could still not only walk, but even dance. But it goes further. Do you remember the crazy asymmetric experiment for the legs? Let's do something like that with thickness. And as a result, they can still not only walk, but even perform gymnastic moves. Woohoo! Now, it's great that one algorithm can adapt to all of these body shapes, but it would be reasonable to ask how long we have to wait for it to adapt. Have a look here. Are you seeing what I am seeing? We can make changes to the body on the fly and the AI adapts to it immediately. No retraining or parameter tuning is required. And that is the point where I fell off the chair when I read this paper. What a time to be alive. And now, Scholars, bring in the boxes. Haha! It can also inhabit dogs and fish, and we can also have some fun with them as we grab a controller and control them in real time. The technique is also very efficient, as it requires very little memory and computation, not to mention that we only have to train one controller for many body types instead of always retraining after each change to the body. However, of course, this algorithm isn't perfect. One of its key limitations is that it will not do well if the body shapes we are producing stray too far away from the ones contained in the training set. But let's leave some space for the next follow-up paper too. 
And just one more thing that didn't quite fit into this story. Every now and then, I get these heartwarming messages from you fellow scholars noting that you've been watching the series for a while and decided to turn your lives around and go back to study more and improve. Good work, Mr. Moonat. That is absolutely amazing, and reading these messages is a true delight to me. Please, keep them coming. What you see here is an instrumentation for a previous paper that we covered in this series, which was made by weights and biases. I think organizing these experiments really showcases the usability of their system. Weights and biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you have an open source, academic or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or click the link in the video description to start tracking your experiments in 5 minutes. Our thanks to weights and biases for their long standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
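As a rough illustration of how one controller can serve many body shapes, here is a hedged sketch of a policy network that is conditioned on a vector of body-shape parameters (limb lengths, thicknesses, and so on) in addition to the usual state observation, so changing the body on the fly only changes that input vector. All dimensions and the network layout are invented for illustration and are not the architecture used in the paper.

import torch
import torch.nn as nn

class MorphologyConditionedPolicy(nn.Module):
    """One policy for many bodies: actions depend on both the state and a body-shape vector."""
    def __init__(self, state_dim=60, shape_dim=12, action_dim=20, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + shape_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state, shape_params):
        # Because the shape vector is just another input, swapping the body at run time
        # means feeding a different vector; the network weights stay fixed.
        return self.net(torch.cat([state, shape_params], dim=-1))

# Toy usage: the same weights produce actions for two different body configurations.
policy = MorphologyConditionedPolicy()
state = torch.randn(1, 60)
short_legs = torch.randn(1, 12)   # one hypothetical body configuration
long_legs  = torch.randn(1, 12)   # another configuration, same policy
print(policy(state, short_legs).shape, policy(state, long_legs).shape)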
[{"start": 0.0, "end": 4.96, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karajon Aifahir."}, {"start": 4.96, "end": 10.52, "text": " This glorious paper from about 7 years ago was about teaching digital creatures to walk,"}, {"start": 10.52, "end": 15.68, "text": " the numbers here showcase the process of learning over time and it is clear that the later"}, {"start": 15.68, "end": 19.36, "text": " generations did much better than the earlier ones."}, {"start": 19.36, "end": 23.8, "text": " These control algorithms are not only able to teach these creatures to walk, but they"}, {"start": 23.8, "end": 29.44, "text": " are quite robust against perturbations as well, or more simply put, we can engage in one"}, {"start": 29.44, "end": 35.2, "text": " of the favorite pastimes of a computer graphics researcher, which is, of course, throwing boxes"}, {"start": 35.2, "end": 39.08, "text": " at a character and seeing how well it can take it."}, {"start": 39.08, "end": 41.6, "text": " This one has done really well."}, {"start": 41.6, "end": 44.0, "text": " Well, kind of."}, {"start": 44.0, "end": 50.0, "text": " Now we just noted that this is a computer graphics paper and an amazing one at that, but it does"}, {"start": 50.0, "end": 54.44, "text": " not yet use these incredible new machine learning techniques that just keep getting"}, {"start": 54.44, "end": 56.6, "text": " better year by year."}, {"start": 56.6, "end": 61.800000000000004, "text": " To see these agents could learn to inhabit a given body, but I am wondering what would"}, {"start": 61.800000000000004, "end": 66.16, "text": " happen if we would suddenly change their bodies on the fly."}, {"start": 66.16, "end": 68.4, "text": " Could previous methods handle it?"}, {"start": 68.4, "end": 73.6, "text": " Unfortunately, not really, and I think it is fair to say that an intelligent agent should"}, {"start": 73.6, "end": 76.88, "text": " have the ability to adapt when something changes."}, {"start": 76.88, "end": 82.0, "text": " Therefore, our next question is how far have we come in these seven years?"}, {"start": 82.0, "end": 86.08, "text": " Can these new machine learning methods help us create a more general agent that could"}, {"start": 86.08, "end": 91.4, "text": " control not just one body, but a variety of different bodies?"}, {"start": 91.4, "end": 95.72, "text": " Let's have a look at today's paper and with that, let the fun begin."}, {"start": 95.72, "end": 100.67999999999999, "text": " This initial agent was blessed with reasonable body proportions, but of course, we can't"}, {"start": 100.67999999999999, "end": 102.4, "text": " just leave it like that."}, {"start": 102.4, "end": 105.2, "text": " Yes, much better."}, {"start": 105.2, "end": 111.24, "text": " And look, all of these combinations can still work properly and all of them use the same"}, {"start": 111.24, "end": 114.44, "text": " one reinforcement learning algorithm."}, {"start": 114.44, "end": 118.52, "text": " And do not think for a second that this is where the fun ends."}, {"start": 118.52, "end": 119.52, "text": " No, no, no."}, {"start": 119.52, "end": 125.56, "text": " Now, hold on to your papers and let's engage in these horrific asymmetric changes."}, {"start": 125.56, "end": 132.52, "text": " There is no way that the same algorithm could be given this body and still be able to walk."}, {"start": 132.52, "end": 134.4, "text": " Goodness, look at that."}, {"start": 134.4, "end": 137.2, "text": " It is indeed still able to walk."}, 
{"start": 137.2, "end": 139.72, "text": " If you have been holding on to your papers, good."}, {"start": 139.72, "end": 146.44, "text": " Now, squeeze that paper because after adjusting the height, it could still not only walk, but"}, {"start": 146.44, "end": 148.8, "text": " even dance."}, {"start": 148.8, "end": 150.4, "text": " But it goes further."}, {"start": 150.4, "end": 154.68, "text": " Do you remember the crazy asymmetric experiment for the legs?"}, {"start": 154.68, "end": 158.6, "text": " Let's do something like that with thickness."}, {"start": 158.6, "end": 164.52, "text": " And as a result, they can still not only walk, but even perform gymnastic moves."}, {"start": 164.52, "end": 165.52, "text": " Woohoo!"}, {"start": 165.52, "end": 170.36, "text": " Now it's great that one algorithm can adapt to all of these body shapes, but it would be"}, {"start": 170.36, "end": 174.60000000000002, "text": " reasonable to ask how much do we have to wait for it to adapt?"}, {"start": 174.60000000000002, "end": 176.72, "text": " Have a look here."}, {"start": 176.72, "end": 179.12, "text": " Are you seeing what I am seeing?"}, {"start": 179.12, "end": 184.56, "text": " We can make changes to the body on the fly and the AI adapts to it immediately."}, {"start": 184.56, "end": 187.88, "text": " No retraining or parameter tuning is required."}, {"start": 187.88, "end": 191.96, "text": " And that is the point where I fell off the chair when I read this paper."}, {"start": 191.96, "end": 193.96, "text": " What a time to be alive."}, {"start": 193.96, "end": 197.52, "text": " And now, scholars bring in the boxes."}, {"start": 197.52, "end": 199.52, "text": " Haha!"}, {"start": 199.52, "end": 205.36, "text": " It can also inhabit dogs and fish, and we can also have some fun with them as we grab a controller"}, {"start": 205.36, "end": 207.92000000000002, "text": " and control them in real time."}, {"start": 207.92000000000002, "end": 213.36, "text": " The technique is also very efficient as it requires very little memory and computation,"}, {"start": 213.36, "end": 218.76000000000002, "text": " not to mention that we only have to train one controller for many body types instead"}, {"start": 218.76000000000002, "end": 222.4, "text": " of always retraining after each change to the body."}, {"start": 222.4, "end": 225.88, "text": " However, of course, this algorithm isn't perfect."}, {"start": 225.88, "end": 230.68, "text": " One of its key limitations is that it will not do well if the body shapes we are producing"}, {"start": 230.68, "end": 235.08, "text": " stray too far away from the ones contained in the training set."}, {"start": 235.08, "end": 238.4, "text": " But let's leave some space for the next follow-up paper too."}, {"start": 238.4, "end": 242.24, "text": " And just one more thing that didn't quite fit into this story."}, {"start": 242.24, "end": 247.04000000000002, "text": " Every now and then, I get these heartwarming messages from you fellow scholars noting that"}, {"start": 247.04000000000002, "end": 251.64000000000001, "text": " you've been watching the series for a while and decided to turn your lives around and"}, {"start": 251.64, "end": 254.44, "text": " go back to study more and improve."}, {"start": 254.44, "end": 255.76, "text": " Good work, Mr. 
Moonat."}, {"start": 255.76, "end": 260.56, "text": " That is absolutely amazing and reading these messages are a true delight to me."}, {"start": 260.56, "end": 262.56, "text": " Please, keep them coming."}, {"start": 262.56, "end": 267.03999999999996, "text": " What you see here is an instrumentation for a previous paper that we covered in this"}, {"start": 267.03999999999996, "end": 270.32, "text": " series which was made by weights and biases."}, {"start": 270.32, "end": 275.8, "text": " I think organizing these experiments really showcases the usability of their system."}, {"start": 275.8, "end": 280.4, "text": " Wates and biases provides tools to track your experiments in your deep learning projects."}, {"start": 280.4, "end": 285.2, "text": " Their system is designed to save you a ton of time and money and it is actively used"}, {"start": 285.2, "end": 291.91999999999996, "text": " in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more."}, {"start": 291.91999999999996, "end": 296.88, "text": " And the best part is that if you have an open source, academic or personal project,"}, {"start": 296.88, "end": 299.0, "text": " you can use their tools for free."}, {"start": 299.0, "end": 301.52, "text": " It really is as good as it gets."}, {"start": 301.52, "end": 307.64, "text": " Make sure to visit them through wnb.com slash papers or click the link in the video description"}, {"start": 307.64, "end": 310.96, "text": " to start tracking your experiments in 5 minutes."}, {"start": 310.96, "end": 315.52, "text": " Our thanks to weights and biases for their long standing support and for helping us make"}, {"start": 315.52, "end": 316.84, "text": " better videos for you."}, {"start": 316.84, "end": 346.4, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Popg7ej4AUU
Beautiful Results From 30 Years Of Light Transport Simulation! ☀️
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Specular Manifold Sampling for Rendering High-Frequency Caustics and Glints" is available here: http://rgl.epfl.ch/publications/Zeltner2020Specular My rendering course is available here, and is free for everyone: https://users.cg.tuwien.ac.at/zsolnai/gfx/rendering-course/ Wish to see the spheres and the volumetric caustic scene in higher resolution? Check out our paper here - https://users.cg.tuwien.ac.at/zsolnai/gfx/adaptive_metropolis/ The PostDoc call is available here - https://www.cg.tuwien.ac.at/news/2020-10-02-Lighting-Simulation-Architectural-Design-%E2%80%93-Post-Doc-Position Check out the following renderers: - Mitsuba: https://www.mitsuba-renderer.org/ - Blender's Cycles - https://www.blender.org/ - LuxCore - https://luxcorerender.org/ Credits: The test scenes use textures from CC0 Textures and cgbook-case, and are lit by environment maps courtesy of HDRI Havenand Paul Debevec. Kettle: Blend Swap user PrinterKiller. 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. I have been yearning for a light transport paper, and goodness, was I ecstatic when reading this one. And by the end of the video, I hope you will be too. And now, we only have to go through just about 30 years of light transport research. As some of you know, if we immerse ourselves into the art of light transport simulations, we can use our computers to simulate millions and millions of light rays and calculate how they get absorbed or scattered off of our objects in a virtual scene. Initially, we start out with a really noisy image, and as we add more rays, the image gets clearer and clearer over time. The time it takes for these images to clean up depends on the complexity of the geometry and our material models, but it typically takes a while. This micro-planet scene mostly contains vegetation, which are matte objects. These we also refer to as diffuse materials, and these typically converge very quickly. As you see, we get meaningful progress on the entirety of the image within the first two minutes of the rendering process. And remember, in light transport simulations, noise is public enemy number one. This used a technique called path tracing. Let's refer to it as the OK technique. And now, let's try to use path tracing, this OK technique, to render this torus in a glass enclosure. This is the first two minutes of the rendering process, and it does not look anything like the previous scene. The previous one was looking pretty smooth after just two minutes, whereas here, you see, this is indeed looking very grim. We have lots of these fireflies, which will take us up to a few days of computation time to clean up, even if we have a modern, powerful machine. So, why did this happen? The reason for this is that there are tricky cases for specular light transport that take many, many millions, if not billions, of light rays to compute properly. Specular here means mirror-like materials, those can get tricky, and this torus that has been enclosed in there is also not doing too well. So, this was path tracing, the OK technique. Now, let's try a better technique called Metropolis light transport. This method is the result of a decade of research and is much better at dealing with difficult scenes. This particular variant is a proud Hungarian algorithm by a scientist called Csaba Kelemen and his colleagues at the Technical University of Budapest. For instance, here is a snippet of our earlier paper on a similarly challenging scene. This is how the OK path tracer did, and in the same amount of time, this is what Metropolis light transport, the better technique, could do. This was a lot more efficient, so let's see how it does with the torus. Now that's unexpected. This is indeed a notoriously difficult scene to render, even for Metropolis light transport, the better technique. As you see, the reflected light patterns that we also refer to as caustics on the floor are much cleaner, but the torus is still not giving up. Let's jump another 15 years of light transport research and use a technique that goes by the name manifold exploration. Let's call this the best technique. Wow! Look at how beautifully it improves the image. It is not only much cleaner, but also converges much more gracefully. It doesn't go from a noisy image to a slightly less noisy image, but almost immediately gives us a solid baseline, and new, cleaned up paths also appear over time. This technique is from 2012, and it truly is mind-boggling how good it is. 
This technique is so difficult to understand and implement that, to the best of my knowledge, the number of people who can and have implemented it properly is exactly one. And that one person is Wenzel Jakob, one of the best minds in the game, and believe it or not, he wrote this method as a PhD student in light transport research. And today, as a professor at EPFL in Switzerland, he and his colleagues set out to create a technique that is as good as manifold exploration, the best technique, but is much simpler. Well, good luck with that, I thought, when skimming through the paper. Let's see how it did. For instance, we have some caustics at the bottom of a pool of water, and as expected, lots of firefly noise with the OK path tracer, and now hold onto your papers, and here is the new technique. Just look at that. It can do so much better in the same amount of time. Let's also have a look at this scene with lots and lots of specular microgeometry, or in other words, glints. This is also a nightmare to render. With the OK path tracer, we have lots of flickering from one frame to the next, and here you see the result with the new technique. Perfect. So, it is indeed possible to take the best technique, manifold exploration, and reimagine it in a way that ordinary humans can also implement. Huge congratulations to the authors of this work, which I think is a crowning achievement in light transport research. And that's why I was ecstatic when I first read through this incredible paper. Make sure to have a look at the paper, and you will see how they borrowed a nice little trick from a recent work in nuclear physics to tackle this problem. The presentation of the paper and the talk video with the details is also brilliant, and I urge you to have a look at it in the video description. This whole thing got me so excited I was barely able to fall asleep for several days now. What a time to be alive. Now, while we look through some more results from the paper, if you feel a little stranded at home and are thinking that this light transport thing is pretty cool, I held a master-level course on this topic at the Technical University of Vienna. Since I was always teaching it to a handful of motivated students, I thought that the teachings shouldn't only be available for a privileged few who can afford a college education, but the teachings should be available for everyone. Free education for everyone. That's what I want. So, the course is available free of charge for everyone, no strings attached, so make sure to click in the video description to get started. We write a full light simulation program from scratch and learn about physics, the world around us, and more. Also, note that my former PhD advisor, Michael Wimmer, is looking to hire a postdoctoral researcher in this area, which is an amazing opportunity to push this field forward. The link is available in the video description. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today. 
Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
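To illustrate why the noise in these renders fades as more rays are added, here is a tiny Monte Carlo sketch in the spirit of the path tracing discussion above: a pixel value is estimated by averaging random samples, and the error shrinks roughly with the square root of the sample count. The integrand is a made-up stand-in, not a real material model or renderer.

import numpy as np

def incoming_light(directions: np.ndarray) -> np.ndarray:
    """Stand-in for the light arriving at a pixel from random directions (not a real BRDF)."""
    return 0.5 + 0.5 * np.sin(8.0 * directions)

rng = np.random.default_rng(42)
# Analytic integral of the stand-in function over [0, 1], used as the noise-free reference value.
true_value = 0.5 + 0.5 * (1.0 - np.cos(8.0)) / 8.0

for num_rays in [16, 256, 4096, 65536]:
    samples = incoming_light(rng.random(num_rays))
    estimate = samples.mean()   # the pixel value after averaging num_rays rays
    print(f"{num_rays:6d} rays -> estimate {estimate:.4f}, error {abs(estimate - true_value):.4f}")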
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Kato Jolnai-Fehir."}, {"start": 4.32, "end": 7.92, "text": " I have been yearning for a light transport paper and goodness."}, {"start": 7.92, "end": 10.64, "text": " Was I ecstatic when reading this one?"}, {"start": 10.64, "end": 13.92, "text": " And by the end of the video, I hope you will be too."}, {"start": 13.92, "end": 19.6, "text": " And now, we only have to go through just about 30 years of light transport research."}, {"start": 19.6, "end": 24.8, "text": " As some of you know, if we immerse ourselves into the art of light transport simulations,"}, {"start": 24.8, "end": 28.88, "text": " we can use our computers to simulate millions and millions of light rays"}, {"start": 28.88, "end": 34.48, "text": " and calculate how they get absorbed or scattered off of our objects in a virtual scene."}, {"start": 34.48, "end": 37.6, "text": " Initially, we start out with a really noisy image,"}, {"start": 37.6, "end": 42.32, "text": " and as we add more rays, the image gets clearer and clearer over time."}, {"start": 42.32, "end": 47.44, "text": " The time it takes for these images to clean up depends on the complexity of the geometry"}, {"start": 47.44, "end": 51.599999999999994, "text": " and our material models, but it typically takes a while."}, {"start": 51.599999999999994, "end": 56.4, "text": " This micro-planet scene mostly contains vegetation, which are map objects."}, {"start": 56.4, "end": 59.44, "text": " These, we also refer to as diffuse materials,"}, {"start": 59.44, "end": 62.239999999999995, "text": " this typically converge very quickly."}, {"start": 62.239999999999995, "end": 65.84, "text": " As you see, we get meaningful progress on the entirety of the image"}, {"start": 65.84, "end": 68.64, "text": " within the first two minutes of the rendering process."}, {"start": 68.64, "end": 74.0, "text": " And remember, in light transport simulations, noise is public enemy number one."}, {"start": 74.0, "end": 76.8, "text": " This used a technique called path tracing."}, {"start": 76.8, "end": 79.68, "text": " Let's refer to it as the OK technique."}, {"start": 79.68, "end": 86.32, "text": " And now, let's try to use path tracing, this OK technique to render this torus in a glass enclosure."}, {"start": 86.32, "end": 89.19999999999999, "text": " This is the first two minutes of the rendering process,"}, {"start": 89.19999999999999, "end": 92.24, "text": " and it does not look anything like the previous scene."}, {"start": 92.24, "end": 96.0, "text": " The previous one was looking pretty smooth after just two minutes,"}, {"start": 96.0, "end": 100.16, "text": " whereas here you see this is indeed looking very grim."}, {"start": 100.16, "end": 106.0, "text": " We have lots of these fireflies, which will take us up to a few days of computation time to clean up,"}, {"start": 106.0, "end": 108.24, "text": " even if we have a modern, powerful machine."}, {"start": 109.03999999999999, "end": 111.11999999999999, "text": " So, why did this happen?"}, {"start": 111.11999999999999, "end": 115.28, "text": " The reason for this is that there are tricky cases for specular light transport"}, {"start": 115.28, "end": 120.96000000000001, "text": " that take many, many millions, if not billions, of light rays to compute properly."}, {"start": 120.96000000000001, "end": 123.92, "text": " Specular here means mirror-like materials,"}, {"start": 123.92, "end": 128.08, "text": " those can get tricky, and this 
torus that has been enclosed in there"}, {"start": 128.08, "end": 129.84, "text": " is also not doing too well."}, {"start": 130.48, "end": 134.0, "text": " So, this was path tracing, the OK technique."}, {"start": 134.0, "end": 138.64, "text": " Now, let's try a better technique called metropolis light transport."}, {"start": 138.64, "end": 141.52, "text": " This method is the result of a decade of research"}, {"start": 141.52, "end": 144.72, "text": " and is much better in dealing with difficult scenes."}, {"start": 144.72, "end": 147.92, "text": " This particular variant is a proud Hungarian algorithm"}, {"start": 147.92, "end": 150.16, "text": " by a scientist called Trobok Kelaeman,"}, {"start": 150.16, "end": 153.6, "text": " and his colleagues at the Technical University of Budapest."}, {"start": 153.6, "end": 156.4, "text": " For instance, here is a snippet of our earlier paper"}, {"start": 156.4, "end": 158.64, "text": " on a similarly challenging scene."}, {"start": 158.64, "end": 161.6, "text": " This is how the OK path tracer did,"}, {"start": 161.6, "end": 163.6, "text": " and in the same amount of time,"}, {"start": 163.6, "end": 166.16, "text": " this is what metropolis light transport,"}, {"start": 166.16, "end": 167.68, "text": " the better technique could do."}, {"start": 168.32, "end": 170.48, "text": " This was a lot more efficient,"}, {"start": 170.48, "end": 175.35999999999999, "text": " so let's see how it does with the torus."}, {"start": 175.35999999999999, "end": 177.12, "text": " Now that's unexpected."}, {"start": 177.12, "end": 180.48, "text": " This is indeed a notoriously difficult scene to render,"}, {"start": 180.48, "end": 184.23999999999998, "text": " even for metropolis light transport, the better technique."}, {"start": 184.23999999999998, "end": 186.32, "text": " As you see, the reflected light patterns"}, {"start": 186.32, "end": 190.72, "text": " that we also refer to as caustics on the floor are much cleaner,"}, {"start": 190.72, "end": 193.12, "text": " but the torus is still not giving up."}, {"start": 194.0, "end": 197.44, "text": " Let's jump another 15 years of light transport research"}, {"start": 197.44, "end": 201.2, "text": " and use a technique that goes by the name manifold exploration."}, {"start": 201.2, "end": 203.68, "text": " Let's call this the best technique."}, {"start": 205.52, "end": 206.0, "text": " Wow!"}, {"start": 206.8, "end": 209.28, "text": " Look at how beautifully it improves the image."}, {"start": 209.28, "end": 211.6, "text": " It is not only much cleaner,"}, {"start": 211.6, "end": 214.48, "text": " but also converges much more gracefully."}, {"start": 215.04, "end": 219.04, "text": " It doesn't go from a noisy image to a slightly less noisy image,"}, {"start": 219.04, "end": 222.8, "text": " but almost immediately gives us a solid baseline"}, {"start": 222.8, "end": 226.16, "text": " and new, cleaned up paths also appear over time."}, {"start": 226.16, "end": 228.88, "text": " This technique is from 2012,"}, {"start": 228.88, "end": 232.07999999999998, "text": " and it truly is mind-boggling how good it is."}, {"start": 232.07999999999998, "end": 235.68, "text": " This technique is so difficult to understand and implement"}, {"start": 235.68, "end": 237.35999999999999, "text": " that to the best of my knowledge,"}, {"start": 237.35999999999999, "end": 241.51999999999998, "text": " the number of people who can and have implemented it properly"}, {"start": 241.51999999999998, "end": 243.04, "text": " is exactly 
one."}, {"start": 243.04, "end": 246.07999999999998, "text": " And that one person is Van Sajakob,"}, {"start": 246.07999999999998, "end": 248.07999999999998, "text": " one of the best minds in the game,"}, {"start": 248.07999999999998, "end": 249.28, "text": " and believe it or not,"}, {"start": 249.28, "end": 253.2, "text": " he wrote this method as a PhD student in light transport research."}, {"start": 253.2, "end": 256.71999999999997, "text": " And today, as a professor at EPF R Switzerland,"}, {"start": 256.71999999999997, "end": 259.92, "text": " he and his colleagues set out to create a technique"}, {"start": 259.92, "end": 262.8, "text": " that is as good as manifold exploration,"}, {"start": 262.8, "end": 266.08, "text": " the best technique but is much simpler."}, {"start": 266.71999999999997, "end": 269.91999999999996, "text": " Well, good luck with that I thought when skimming through the paper."}, {"start": 270.56, "end": 272.0, "text": " Let's see how it did."}, {"start": 272.0, "end": 275.91999999999996, "text": " For instance, we have some caustics at the bottom of a pool of water,"}, {"start": 275.91999999999996, "end": 280.24, "text": " has expected lots of firefly noise with the OK past tracer,"}, {"start": 280.24, "end": 284.40000000000003, "text": " and now hold onto your papers, and here is the note technique."}, {"start": 285.12, "end": 286.72, "text": " Just look at that."}, {"start": 286.72, "end": 290.16, "text": " It can do so much better in the same amount of time."}, {"start": 290.8, "end": 295.6, "text": " Let's also have a look at this scene with lots and lots of specular microgeometry"}, {"start": 295.6, "end": 297.6, "text": " or in other words, glints."}, {"start": 298.08, "end": 300.48, "text": " This is also a nightmare to render."}, {"start": 300.48, "end": 304.88, "text": " With the OK past tracer, we have lots of flickering from one frame to the next,"}, {"start": 304.88, "end": 307.68, "text": " and here you see the result with the note technique."}, {"start": 308.40000000000003, "end": 308.88, "text": " Perfect."}, {"start": 308.88, "end": 312.56, "text": " So, it is indeed possible to take the best technique,"}, {"start": 312.56, "end": 318.48, "text": " manifold exploration, and reimagine it in a way that ordinary humans can also implement."}, {"start": 318.48, "end": 321.28, "text": " Huge congratulations to the authors of this work,"}, {"start": 321.28, "end": 325.28, "text": " that I think is a crown achievement in large transport research."}, {"start": 325.28, "end": 330.4, "text": " And that's why I was ecstatic when I first read through this incredible paper."}, {"start": 330.4, "end": 332.4, "text": " Make sure to have a look at the paper,"}, {"start": 332.4, "end": 336.71999999999997, "text": " and you will see how they borrowed a nice little trick from a recent work"}, {"start": 336.72, "end": 339.28000000000003, "text": " in nuclear physics to tackle this problem."}, {"start": 339.28000000000003, "end": 344.32000000000005, "text": " The presentation of the paper and the talk video with the details is also brilliant,"}, {"start": 344.32000000000005, "end": 347.12, "text": " and I urge you to have a look at it in the video description."}, {"start": 347.12, "end": 352.96000000000004, "text": " This whole thing got me so excited I was barely able to follow sleep for several days now."}, {"start": 352.96000000000004, "end": 354.56, "text": " What a time to be alive."}, {"start": 354.56, "end": 358.0, "text": " Now, while we look through some 
more results from the paper,"}, {"start": 358.0, "end": 363.76000000000005, "text": " if you feel a little stranded at home and are thinking that this light transport thing is pretty cool,"}, {"start": 363.76, "end": 368.96, "text": " I held a master-level course on this topic at the Technical University of Vienna."}, {"start": 368.96, "end": 372.48, "text": " Since I was always teaching it to a handful of motivated students,"}, {"start": 372.48, "end": 376.71999999999997, "text": " I thought that the teachings shouldn't only be available for a privileged few"}, {"start": 376.71999999999997, "end": 379.03999999999996, "text": " who can afford a college education,"}, {"start": 379.03999999999996, "end": 382.0, "text": " but the teachings should be available for everyone."}, {"start": 382.56, "end": 384.48, "text": " Free education for everyone."}, {"start": 384.48, "end": 385.76, "text": " That's what I want."}, {"start": 385.76, "end": 389.12, "text": " So, the course is available free of charge for everyone,"}, {"start": 389.12, "end": 393.36, "text": " no strings attached, so make sure to click in the video description to get started."}, {"start": 393.36, "end": 396.48, "text": " We write a full-light simulation program from scratch"}, {"start": 396.48, "end": 399.92, "text": " and learn about physics, the world around us, and more."}, {"start": 399.92, "end": 403.6, "text": " Also, note that my former PhD advisor, Michal Wimmer,"}, {"start": 403.6, "end": 407.04, "text": " is looking to hire a postdoctoral researcher in this area,"}, {"start": 407.04, "end": 410.72, "text": " which is an amazing opportunity to push this field forward."}, {"start": 410.72, "end": 413.36, "text": " The link is available in the video description."}, {"start": 413.36, "end": 416.88, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 416.88, "end": 420.48, "text": " If you're looking for inexpensive Cloud GPUs for AI,"}, {"start": 420.48, "end": 426.16, "text": " check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000,"}, {"start": 426.16, "end": 429.68, "text": " RTX 8000, and V100 instances,"}, {"start": 429.68, "end": 437.28000000000003, "text": " and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Asia."}, {"start": 437.28000000000003, "end": 442.64000000000004, "text": " Plus, they are the only Cloud service with 48GB RTX 8000."}, {"start": 442.64000000000004, "end": 447.12, "text": " Join researchers at organizations like Apple, MIT, and Caltech"}, {"start": 447.12, "end": 450.88, "text": " in using Lambda Cloud instances, workstations, or servers."}, {"start": 450.88, "end": 454.96, "text": " Make sure to go to LambdaLabs.com slash papers to sign up"}, {"start": 454.96, "end": 457.68, "text": " for one of their amazing GPU instances today."}, {"start": 457.68, "end": 460.4, "text": " Our thanks to Lambda for their long-standing support"}, {"start": 460.4, "end": 463.2, "text": " and for helping us make better videos for you."}, {"start": 463.2, "end": 465.36, "text": " Thanks for watching and for your generous support,"}, {"start": 465.36, "end": 478.32, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=UiEaWkf3r9A
AI-Based Style Transfer For Video…Now in Real Time!
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://app.wandb.ai/stacey/yolo-drive/reports/Bounding-Boxes-for-Object-Detection--Vmlldzo4Nzg4MQ 📝 The paper "Interactive Video Stylization Using Few-Shot Patch-Based Training" is available here: https://ondrejtexler.github.io/patch-based_training/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two minute papers with Dr. Károly Zsolnai-Fehér. Style transfer is an interesting problem in machine learning research where we have two input images, one for content and one for style, and the output is our content image reimagined with this new style. The cool part is that the content can be a photo straight from our camera and the style can be a painting, which leads to super fun and really good looking results. We have seen plenty of papers doing variations of style transfer, but I always wonder, can we push this concept further? And the answer is yes. For instance, few people know that style transfer can also be done for video. First, we record a video with our camera, then take a still image from the video and apply our artistic style to it. Then, our style will be applied to the entirety of the video. The main advantage of this new method compared to previous ones is that they either take too long or we have to run an expensive pre-training step. With this new one, we can just start drawing and see the output results right away. But it gets even better. Due to the interactive nature of this new technique, we can even do this live. All we need to do is change our input drawing, and it transfers the new style to the video as fast as we can draw. This way, we can refine our input style for as long as we wish, or until we find the perfect way to stylize the video. And there is even more. If this works interactively, then it has to be able to offer an amazing workflow where we can capture a video of ourselves live and mark it up as we go. Let's see. Oh wow, just look at that. It is great to see that this new method also retains temporal consistency over a long time frame, which means that even if the marked up keyframe is from a long time ago, it can still be applied to the video and the outputs will show minimal flickering. And note that we can not only play with the colors, but with the geometry too. Look, we can warp the style image and it will be reflected in the output as well. I bet there is going to be a follow-up paper on more elaborate shape modifications as well. And this new work improves upon previous methods in even more areas. For instance, this is a method from just one year ago, and here you see how it struggled with contour based styles. Here's a keyframe of the input video and here's the style that we wish to apply to it. Later, this method from last year seems to lose not only the contours, but a lot of visual detail is also gone. So, how did the new method do in this case? Look, it not only retains the contours better, but a lot more of the sharp details remain in the outputs. Amazing. Now, note that this technique also comes with some limitations. For instance, there is still some temporal flickering in the outputs, and in some cases separating the foreground and the background is challenging. But really, such incredible progress in just one year, and I can only imagine what this method will be capable of two more papers down the line. What a time to be alive. Make sure to have a look at the paper in the video description and you will see many additional details. For instance, how you can just partially fill in some of the keyframes with your style and still get an excellent result. This episode has been supported by weights and biases. In this post, they show you how to test and explore putting bounding boxes around objects in your photos. Weights and biases provides tools to track your experiments in your deep learning projects. 
Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you have an open source, academic or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or click the link in the video description to start tracking your experiments in five minutes. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
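The transcription above explains style transfer at a high level: a content image and a style image are combined so that the content is redrawn in the new style. As a purely illustrative aside, the minimal sketch below shows the classic optimization-based formulation of that idea (a content loss plus a Gram-matrix style loss), not the paper's few-shot patch-based training; the tiny random feature extractor, the image sizes and the loss weights are all assumptions standing in for a pretrained network such as VGG.

```python
# Minimal sketch of optimization-based style transfer (content + style losses).
# NOT the paper's few-shot patch-based method; everything here is illustrative.
import torch
import torch.nn as nn

def gram_matrix(features):
    # features: (channels, height, width) -> (channels, channels) correlations
    c, h, w = features.shape
    f = features.view(c, h * w)
    return (f @ f.t()) / (c * h * w)

# Stand-in feature extractor (a real system would use pretrained VGG layers).
extractor = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)
for p in extractor.parameters():
    p.requires_grad_(False)

content = torch.rand(1, 3, 64, 64)   # placeholder "video keyframe"
style   = torch.rand(1, 3, 64, 64)   # placeholder "artistic drawing"
output  = content.clone().requires_grad_(True)

with torch.no_grad():
    content_feat = extractor(content)[0]
    style_gram   = gram_matrix(extractor(style)[0])

optimizer = torch.optim.Adam([output], lr=0.05)
for step in range(200):
    optimizer.zero_grad()
    feat = extractor(output)[0]
    content_loss = torch.mean((feat - content_feat) ** 2)
    style_loss   = torch.mean((gram_matrix(feat) - style_gram) ** 2)
    loss = content_loss + 1e3 * style_loss   # loss weights are arbitrary here
    loss.backward()
    optimizer.step()
```

In contrast to this slow per-frame optimization, the appeal of the paper discussed above is that a network trained on just a handful of stylized keyframes can then stylize the remaining frames quickly enough for interactive, live use.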
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Karajon Aifahir."}, {"start": 4.6000000000000005, "end": 10.44, "text": " Style Transfer is an interesting problem in machine learning research where we have two input images,"}, {"start": 10.44, "end": 17.28, "text": " one for content and one for style and the output is our content image reimagined with this new style."}, {"start": 17.28, "end": 21.68, "text": " The cool part is that the content can be a photo straight from our camera"}, {"start": 21.68, "end": 27.32, "text": " and the style can be a painting which leads to super fun and really good looking results."}, {"start": 27.32, "end": 32.56, "text": " We have seen plenty of papers doing variations of style transfer, but I always wonder,"}, {"start": 32.56, "end": 34.92, "text": " can we push this concept further?"}, {"start": 34.92, "end": 36.92, "text": " And the answer is yes."}, {"start": 36.92, "end": 41.84, "text": " For instance, few people know that style transfer can also be done for video."}, {"start": 41.84, "end": 46.92, "text": " First, we record a video with our camera, then take a still image from the video"}, {"start": 46.92, "end": 49.400000000000006, "text": " and apply our artistic style to it."}, {"start": 49.400000000000006, "end": 53.72, "text": " Then our style will be applied to the entirety of the video."}, {"start": 53.72, "end": 59.4, "text": " The main advantage of this new method compared to previous ones is that they either take too long"}, {"start": 59.4, "end": 62.879999999999995, "text": " or we have to run an expensive pre-training step."}, {"start": 62.879999999999995, "end": 67.8, "text": " With this new one, we can just start drawing and see the output results right away."}, {"start": 67.8, "end": 69.96000000000001, "text": " But it gets even better."}, {"start": 69.96000000000001, "end": 74.75999999999999, "text": " Due to the interactive nature of this new technique, we can even do this live."}, {"start": 74.75999999999999, "end": 82.03999999999999, "text": " All we need to do is change our input drawing and it transfers the new style to the video as fast as we can draw."}, {"start": 82.04, "end": 86.36000000000001, "text": " This way, we can refine our input style for as long as we wish,"}, {"start": 86.36000000000001, "end": 90.60000000000001, "text": " or until we find the perfect way to stylize the video."}, {"start": 90.60000000000001, "end": 92.60000000000001, "text": " And there is even more."}, {"start": 92.60000000000001, "end": 97.56, "text": " If this works interactively, then it has to be able to offer an amazing workflow"}, {"start": 97.56, "end": 103.24000000000001, "text": " where we can capture a video of ourselves live and mark it up as we go."}, {"start": 103.24000000000001, "end": 104.44000000000001, "text": " Let's see."}, {"start": 107.80000000000001, "end": 110.44000000000001, "text": " Oh wow, just look at that."}, {"start": 110.44, "end": 116.44, "text": " It is great to see that this new method also retains temporal consistency over a long time frame,"}, {"start": 116.44, "end": 120.6, "text": " which means that even if the marked up keyframe is from a long time ago,"}, {"start": 120.6, "end": 125.24, "text": " it can still be applied to the video and the outputs will show minimal flickering."}, {"start": 130.28, "end": 134.84, "text": " And note that we can not only play with the colors, but with the geometry too."}, {"start": 134.84, "end": 140.6, "text": " 
Look, we can warp the style image and it will be reflected in the output as well."}, {"start": 140.6, "end": 145.48000000000002, "text": " I bet there is going to be a follow-up paper on more elaborate shape modifications as well."}, {"start": 145.48000000000002, "end": 150.6, "text": " And this new work improves upon previous methods in even more areas."}, {"start": 150.6, "end": 157.88, "text": " For instance, this is a method from just one year ago and here you see how it struggled with contour based styles."}, {"start": 157.88, "end": 163.24, "text": " Here's a keyframe of the input video and here's the style that we wish to apply to it."}, {"start": 163.24, "end": 170.76000000000002, "text": " Later, this method from last year seems to lose not only the contours, but a lot of visual detail is also gone."}, {"start": 170.76000000000002, "end": 173.96, "text": " So, how did the new method do in this case?"}, {"start": 173.96, "end": 181.48000000000002, "text": " Look, it not only retains the contours better, but a lot more of the sharp details remain in the outputs."}, {"start": 181.48000000000002, "end": 182.60000000000002, "text": " Amazing."}, {"start": 182.60000000000002, "end": 186.52, "text": " Now note that this technique also comes with some limitations."}, {"start": 186.52, "end": 193.16000000000003, "text": " For instance, there is still some temporal flickering in the outputs and in some cases separating the foreground"}, {"start": 193.16, "end": 194.92, "text": " and the background is challenging."}, {"start": 195.56, "end": 202.76, "text": " But really, such incredible progress in just one year and I can only imagine what this method will be capable of"}, {"start": 202.76, "end": 204.2, "text": " two more papers down the line."}, {"start": 204.76, "end": 206.44, "text": " What a time to be alive."}, {"start": 206.44, "end": 211.8, "text": " Make sure to have a look at the paper in the video description and you will see many additional details."}, {"start": 211.8, "end": 216.92, "text": " For instance, how you can just partially fill in some of the keyframes with your style"}, {"start": 216.92, "end": 218.92, "text": " and still get an excellent result."}, {"start": 218.92, "end": 222.67999999999998, "text": " This episode has been supported by weights and biases."}, {"start": 222.67999999999998, "end": 228.67999999999998, "text": " In this post, they show you how to test and explore putting bounding boxes around objects in your photos."}, {"start": 228.67999999999998, "end": 233.72, "text": " Weight and biases provides tools to track your experiments in your deep learning projects."}, {"start": 233.72, "end": 239.64, "text": " Their system is designed to save you a ton of time and money and it is actively used in projects"}, {"start": 239.64, "end": 245.23999999999998, "text": " at prestigious labs such as OpenAI, Toyota Research, GitHub and more."}, {"start": 245.24, "end": 250.36, "text": " And the best part is that if you have an open source, academic or personal project,"}, {"start": 250.36, "end": 252.44, "text": " you can use their tools for free."}, {"start": 252.44, "end": 254.92000000000002, "text": " It really is as good as it gets."}, {"start": 254.92000000000002, "end": 261.16, "text": " Make sure to visit them through wnb.com slash papers or click the link in the video description"}, {"start": 261.16, "end": 264.44, "text": " to start tracking your experiments in five minutes."}, {"start": 264.44, "end": 270.36, "text": " Our thanks to weights and biases for 
their long-standing support and for helping us make better videos for you."}, {"start": 270.36, "end": 277.96000000000004, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=JKe53bcyBQY
Elon Musk’s Neuralink Puts An AI Into Your Brain! 🧠
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://app.wandb.ai/jack-morris/david-vs-goliath/reports/Does-model-size-matter%3F-A-comparison-of-BERT-and-DistilBERT--VmlldzoxMDUxNzU 📝 The paper "An integrated brain-machine interface platform with thousands of channels" is available here: https://www.biorxiv.org/content/10.1101/703801v2 Neuralink is hiring! Apply here: https://jobs.lever.co/neuralink 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #neuralink #elonmusk
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Due to popular requests, today we will talk about Neuralink, Elon Musk's neural engineering company that he created to develop brain-machine interfaces. And your first question likely is, why talk about Neuralink now? There was a recent event and another one last year as well, why did I not cover that? Well, the launch event from last year indeed promised a great deal. In this series, we often look at research works that are just one year apart and marvel at the difference scientists have been able to make in this tiny, tiny time frame. So first, let's talk about their paper from 2019, which will be incredible. And then, see how far they have come in a year, which, as you will see, is even more incredible. The promise is to be able to read and write information to and from the brain. To accomplish this, as of 2019, they used this robot to insert the electrodes into your brain tissue. You can see the insertion process here. From the close-up image, you might think that this is a huge needle, but in fact, this needle is extremely tiny, you can see a penny for scale here. This is the story of how this rat got equipped with a USB port. As this process is almost like inserting microphones into its brain, now we are able to read the neural signals of this rat. Normally, these are analog signals which are read and digitized by Neuralink's implant, and now this brain data is represented as a digital signal. Well, at first, this looks a bit like gibberish. Do we really have to undergo a brain surgery to get a bunch of these squiggly curves? What do these really do for us? We can have the Neuralink chip analyze these signals and look for action potentials in them. These are also referred to as spikes because of their shape. That sounds a bit better, but still, what does this do for us? Let's see. Here we have a person with a mouse in their hand. This is an outward movement with the mouse, and then reaching back. Simple enough. Now, what you see here below is the activity of an example neuron. When nothing is happening, there is some firing, but not much activity, and now, look, when reaching out, this neuron fires a great deal, and suddenly, when reaching back again, nearly no activity. This means that this neuron is tuned for an outward motion, and this other one is tuned for the returning motion. And all this is now information that we can read in real time, and the more neurons we can read, the more complex motion we can read. Absolutely incredible. However, this is still a little difficult to read, so let's order them by what kind of motion makes them excited. And there we go. Suddenly, this is a much more organized way to present all this neural activity, and now we can detect what kind of motion the brain is thinking about. This was the reading part, and that's just the start. What is even cooler is that we can just invert this process, read this spiking activity, and just by looking at these, we can reconstruct the motion the human wishes to perform. With this, brain-machine interfaces can be created for people with all kinds of disabilities, where the brain can still think about the movements, but the connection to the rest of the body is severed. Now, these people only have to think about moving, and then the Neuralink device will read it and perform the cursor movement for them. It really feels like we live in a science fiction world.
And all this signal processing is now possible automatically, and in real time, and all we need for this is this tiny, tiny chip that takes up just a few square millimeters. And don't forget, that is just version 1 from 2019. Now, onwards to the 2020 event, where it gets even better. The Neuralink device has been placed into Gertrude the pig's brain, and here you see it in action. We see the raster view here, and luckily, we already know what it means: this lays bare the neural action potentials before our eyes, or, in other words, which neuron is spiking, and exactly when. Below, with blue, you see these activities summed up for our convenience, and this way, you will not only see, but hear it too, as these neurons are tuned for snout boops. In other words, you will see, and hear, that the more the snout is stimulated, the more neural activity it will show. Let's listen. And all this is possible today, and in real time. That was one of the highlights of the 2020 progress update event, but it went further. Much further. Look, this is a pig on a treadmill, and here you see the brain signal readings. This signal marked with a circle shows where a joint or limb is about to move, while the other, dimmer colored signal is the chip's prediction as to what is about to happen. It takes into consideration periodicity, and predicts higher frequency movement, like these sharp turns, really well. The two are almost identical, and that means exactly what you think it means. Today, we can not only read and write, but even predict what the pig's brain is about to do. And that was the part where I fell off the chair when I watched this event live. You can also see the real and predicted world-space positions for these body parts as well. Very close. Now note that there is a vast body of research in brain-machine interfaces, and many of these things were possible in lab conditions, and Neuralink's quest here is to make them accessible for a wider audience within the next decade. If this project further improves at this rate, it could help many paralyzed people around the world live a longer and more meaningful life, and the neural enhancement aspect is also not out of the question. Just thinking about summoning your Tesla might also summon it, which sounds like science fiction, and based on these results, you see that it may even be one of the simplest tasks for a Neuralink chip in the future. And who knows, one day, maybe, with this device, these videos could be beamed into your brain much quicker, and this series would have to be renamed from Two Minute Papers to Two Second Papers, or maybe even Two Microsecond Papers. They might actually fit into two minutes, like the title says, now that would truly be a miracle. Huge thanks to the scientists at Neuralink for our discussions about the concepts described in this video, and for ensuring that you get accurate information. This is one of the reasons why our coverage of the 2020 event is way too late compared to many mainstream media outlets, which leads to a great deal fewer views for us, but it doesn't matter. We are not maximizing views here, we are maximizing learning. Note that they are also hiring, so if you wish to be a part of their vision and work with them, make sure to apply. The link is available in the video description. This episode has been supported by Weights & Biases. In this post, they show you how to train and compare powerful, modern language models, such as BERT and DistilBERT from the Hugging Face library.
Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you have an open source, academic, or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or click the link in the video description to start tracking your experiments in five minutes. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
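As a small illustration of the "look for action potentials" step described in the transcription above, here is a hedged, minimal sketch of threshold-based spike detection on a digitized voltage trace. This is not Neuralink's on-chip algorithm; the synthetic signal, sampling rate, threshold rule and refractory period are all assumptions chosen just to make the idea concrete.

```python
# Simple threshold-crossing spike detector on a digitized trace (illustrative only).
import numpy as np

def detect_spikes(signal, fs, threshold_sd=4.5, refractory_ms=1.0):
    """Return sample indices where the trace crosses a noise-based threshold."""
    # Robust noise estimate via the median absolute deviation, a common choice
    # in spike detection literature.
    noise_sd = np.median(np.abs(signal)) / 0.6745
    threshold = threshold_sd * noise_sd
    refractory = int(fs * refractory_ms / 1000.0)

    spike_times, last = [], -refractory
    for i, v in enumerate(signal):
        if abs(v) > threshold and i - last >= refractory:
            spike_times.append(i)
            last = i
    return np.array(spike_times)

# Synthetic example: one second of noise plus a few injected "action potentials".
fs = 20_000                               # samples per second (assumed)
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, fs)
for t in (2_000, 7_500, 13_000):
    trace[t:t + 20] += -8.0               # crude negative-going spikes
print(detect_spikes(trace, fs))           # -> indices near 2000, 7500, 13000
```

Detected spike times like these are what the later steps in the narration build on: binning and sorting them per neuron yields the raster and firing-rate views from which movement can be decoded or predicted.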
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.4, "end": 8.16, "text": " Due to popular requests, today we will talk about mural link,"}, {"start": 8.16, "end": 14.08, "text": " Elon Musk's Neural Engineering Company that he created to develop brain machine interfaces."}, {"start": 14.08, "end": 18.16, "text": " And your first question likely is, why talk about mural link now?"}, {"start": 18.16, "end": 23.76, "text": " There was a recent event and another one last year as well, why did I not cover that?"}, {"start": 23.76, "end": 28.0, "text": " Well, the launch event from last year indeed promised a great deal."}, {"start": 28.0, "end": 32.32, "text": " In this series, we often look at research works that are just one year apart"}, {"start": 32.32, "end": 38.480000000000004, "text": " and marvel at the difference scientists have been able to make in the tiny tiny time frame."}, {"start": 38.480000000000004, "end": 43.6, "text": " So first, let's talk about their paper from 2019, which will be incredible."}, {"start": 43.6, "end": 49.760000000000005, "text": " And then, see how far they have come in a year, which, as you will see, is even more incredible."}, {"start": 50.400000000000006, "end": 55.84, "text": " The promise is to be able to read and write information too and from the brain."}, {"start": 55.84, "end": 62.720000000000006, "text": " To accomplish this, as of 2019, they used this robot to insert the electrodes into your brain tissue."}, {"start": 62.720000000000006, "end": 64.96000000000001, "text": " You can see the insertion process here."}, {"start": 64.96000000000001, "end": 68.72, "text": " From the close-up image, you might think that this is a huge needle,"}, {"start": 68.72, "end": 73.44, "text": " but in fact, this needle is extremely tiny, you can see a penny for scale here."}, {"start": 74.24000000000001, "end": 78.56, "text": " This is the story of how this rat got equipped with a USB port."}, {"start": 78.56, "end": 82.56, "text": " As this process is almost like inserting microphones into its brain,"}, {"start": 82.56, "end": 86.08, "text": " now we are able to read the neural signals of this rat."}, {"start": 86.08, "end": 91.44, "text": " Normally, these are analog signals which are read and digitized by neural links implant,"}, {"start": 91.44, "end": 95.2, "text": " and now this brain data is represented as a digital signal."}, {"start": 95.76, "end": 99.12, "text": " Well, at first, this looks a bit like gibberish."}, {"start": 99.12, "end": 103.76, "text": " Do we really have to undergo a brain surgery to get a bunch of these quickly curves?"}, {"start": 103.76, "end": 105.68, "text": " What do these really do for us?"}, {"start": 105.68, "end": 111.52000000000001, "text": " We can have the neural link chip analyze these signals and look for action potentials in them."}, {"start": 111.52, "end": 114.56, "text": " These are also referred to as spikes because of their shape."}, {"start": 115.2, "end": 118.8, "text": " That sounds a bit better, but still, what does this do for us?"}, {"start": 119.36, "end": 122.96, "text": " Let's see. Here we have a person with a mouse in their hand."}, {"start": 123.52, "end": 128.56, "text": " This is an outward movement with a mouse, and then reaching back. 
Simple enough."}, {"start": 129.44, "end": 133.51999999999998, "text": " Now, what you see here below is the activity of an example neuron."}, {"start": 134.07999999999998, "end": 140.0, "text": " When nothing is happening, there is some firing, but not much activity, and now, look,"}, {"start": 140.0, "end": 145.76, "text": " when reaching out, this neuron fires a great deal, and suddenly, when reaching back again,"}, {"start": 145.76, "end": 151.28, "text": " nearly no activity. This means that this neuron is tuned for an outward motion,"}, {"start": 151.28, "end": 154.72, "text": " and this other one is tuned for the returning motion."}, {"start": 154.72, "end": 160.64, "text": " And all this is now information that we can read in real time, and the more neurons we can read,"}, {"start": 160.64, "end": 164.8, "text": " the more complex motion we can read. Absolutely incredible."}, {"start": 164.8, "end": 171.68, "text": " However, this is still a little difficult to read, so let's order them by what kind of motion makes them excited."}, {"start": 171.68, "end": 178.72, "text": " And there we go. Suddenly, this is a much more organized way to present all this neural activity,"}, {"start": 178.72, "end": 182.72000000000003, "text": " and now we can detect what kind of motion the brain is thinking about."}, {"start": 182.72000000000003, "end": 188.96, "text": " This was the reading part, and that's just the start. What is even cooler is that we can just invert"}, {"start": 188.96, "end": 194.88, "text": " this process, read this spiking activity, and just by looking at these, we can reconstruct the"}, {"start": 194.88, "end": 201.36, "text": " motion the human wishes to perform. With this, brain machine interfaces can be created for people"}, {"start": 201.36, "end": 205.84, "text": " with all kinds of disabilities, where the brain can still think about the movements,"}, {"start": 205.84, "end": 208.48000000000002, "text": " but the connection to the rest of the body is severed."}, {"start": 209.20000000000002, "end": 215.28, "text": " Now, these people only have to think about moving, and then, the neuraling device will read it,"}, {"start": 215.28, "end": 220.88, "text": " and perform the cursor movement for them. It really feels like we live in a science fiction world."}, {"start": 221.52, "end": 226.8, "text": " And all this signal processing is now possible automatically, and in real time,"}, {"start": 226.8, "end": 232.64, "text": " and all we need for this is this tiny, tiny chip that takes just a few square millimeters."}, {"start": 232.96, "end": 237.12, "text": " And don't forget, that is just version 1 from 2019."}, {"start": 238.0, "end": 242.72, "text": " Now, onwards to the 2020 event, where it gets even better."}, {"start": 242.72, "end": 248.88, "text": " The neuraling device has been placed into Gertrude, the Pixbrain, and here you see it in action."}, {"start": 248.88, "end": 253.04, "text": " We see the rest of you here, and luckily, we already know what it means,"}, {"start": 253.04, "end": 258.32, "text": " this lays bare the neural action potentials before our eyes, or, in other words,"}, {"start": 258.32, "end": 264.4, "text": " which neuron is spiking, and exactly when. Below, with blue, you see these activities"}, {"start": 264.4, "end": 269.76, "text": " summed up for our convenience, and this way, you will not only see, but here it too,"}, {"start": 269.76, "end": 275.68, "text": " as these neurons are tuned for snout boops. 
In other words, you will see, and here,"}, {"start": 275.68, "end": 279.59999999999997, "text": " that the more the snout is stimulated, the more neural activity it will show."}, {"start": 280.15999999999997, "end": 280.8, "text": " Let's listen."}, {"start": 289.44, "end": 296.24, "text": " And all this is possible today, and in real time. That was one of the highlights of the 2020 progress"}, {"start": 296.24, "end": 303.12, "text": " update event, but it went further. Much further. Look, this is a pig on a treadmill,"}, {"start": 303.12, "end": 309.52, "text": " and here you see the brain signal readings. This signal marked with a circle shows where a joint"}, {"start": 309.52, "end": 315.12, "text": " or limb is about to move, where the other dimmer colored signal is the chip's prediction,"}, {"start": 315.12, "end": 320.64, "text": " as to what is about to happen. It takes into consideration periodicity, and"}, {"start": 320.64, "end": 328.56, "text": " predicts higher frequency movement, like these sharp turns really well. The two are almost identical,"}, {"start": 328.56, "end": 334.71999999999997, "text": " and that means exactly what you think it means. Today, we can not only read and write,"}, {"start": 334.71999999999997, "end": 340.47999999999996, "text": " but even predict what the pig's brain is about to do. And that was the part where I fell off the"}, {"start": 340.47999999999996, "end": 346.24, "text": " chair when I watched this event live. You can also see the real and predicted world space positions"}, {"start": 346.24, "end": 354.16, "text": " for these body parts as well. Very close. Now note that there is a vast body of research"}, {"start": 354.16, "end": 359.36, "text": " in brain machine interfaces, and many of these things were possible in lab conditions,"}, {"start": 359.36, "end": 365.12, "text": " and Muralinx Quest here is to make them accessible for a wider audience within the next decade."}, {"start": 365.12, "end": 370.8, "text": " If this project further improves at this rate, it could help many paralyzed people around the world"}, {"start": 370.8, "end": 377.12, "text": " live a longer and more meaningful life, and the neural enhancement aspect is also not out of question."}, {"start": 377.68, "end": 383.44, "text": " Just thinking about summoning your Tesla might also summon it, which sounds like science fiction,"}, {"start": 383.44, "end": 388.40000000000003, "text": " and based on these results, you see that it may even be one of the simplest tasks for a"}, {"start": 388.40000000000003, "end": 394.48, "text": " neuraling chip in the future. And who knows, one day, maybe, with this device, these videos could"}, {"start": 394.48, "end": 399.6, "text": " be beamed into your brain much quicker, and this series would have to be renamed from two"}, {"start": 399.6, "end": 406.96000000000004, "text": " minute papers to two second papers, or maybe even two microsecond papers. They might actually fit"}, {"start": 406.96000000000004, "end": 412.24, "text": " into two minutes, like the title says, now that would truly be a miracle. Huge thanks to"}, {"start": 412.24, "end": 416.96000000000004, "text": " scientists at Muralinx for our discussions about the concepts described in this video, and then"}, {"start": 416.96000000000004, "end": 422.48, "text": " ensuring that you get accurate information. 
This is one of the reasons why our coverage of the 2020"}, {"start": 422.48, "end": 428.16, "text": " event is way too late compared to many mainstream media outlets, which leads to a great deal less"}, {"start": 428.16, "end": 434.40000000000003, "text": " views for us, but it doesn't matter. We are not maximizing views here, we are maximizing learning."}, {"start": 435.04, "end": 439.92, "text": " Note that they are also hiring, if you wish to be a part of their vision and work with them,"}, {"start": 439.92, "end": 444.72, "text": " make sure to apply. The link is available in the video description. This episode has been"}, {"start": 444.72, "end": 451.20000000000005, "text": " supported by weights and biases. In this post, they show you how to train and compare powerful,"}, {"start": 451.20000000000005, "end": 456.32000000000005, "text": " modern language models, such as Bert and distale Bert from the Hagenkface library."}, {"start": 456.32, "end": 460.96, "text": " Wates and biases provides tools to track your experiments in your deep learning projects."}, {"start": 460.96, "end": 466.8, "text": " Their system is designed to save you a ton of time and money, and it is actively used in projects"}, {"start": 466.8, "end": 473.84, "text": " at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if"}, {"start": 473.84, "end": 479.6, "text": " you have an open source, academic, or personal project, you can use their tools for free."}, {"start": 479.6, "end": 486.56, "text": " It really is as good as it gets. Make sure to visit them through wnbe.com slash papers, or"}, {"start": 486.56, "end": 491.52000000000004, "text": " click the link in the video description to start tracking your experiments in five minutes."}, {"start": 491.52000000000004, "end": 496.64000000000004, "text": " Our thanks to weights and biases for their long-standing support and for helping us make better"}, {"start": 496.64, "end": 510.47999999999996, "text": " videos for you. Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=T29O-MhYALw
This AI Creates Real Scenes From Your Photos! 📷
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://app.wandb.ai/sweep/nerf/reports/NeRF-%E2%80%93-Representing-Scenes-as-Neural-Radiance-Fields-for-View-Synthesis--Vmlldzo3ODIzMA 📝 The paper "NeRF in the Wild - Neural Radiance Fields for Unconstrained Photo Collections" is available here: https://nerf-w.github.io/ Photos by Flickr users dbowie78, vasnic64, punch, paradasos, itia4u, jblesa, joshheumann, ojotes, chyauchentravelworl, burkeandhare, photogreuhphie / CC BY 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Approximately five months ago, we talked about a technique called Neural Radiance Fields, or NeRF in short, that worked on a 5D neural radiance field representation. So, what does this mean exactly? What this means is that we have three dimensions for location and two for view direction, or in short, the input is where we are in space; we give it to a neural network to learn it and synthesize new, previously unseen views of not just the materials in the scene, but the entire scene itself. In short, it can learn and reproduce entire real-world scenes from only a few views by using neural networks. And the results were just out of this world. Look, it could deal with many kinds of matte and glossy materials, and even refractions worked quite well. It also understood depth so accurately that we could use it for augmented reality applications where we put a new virtual object in the scene and it correctly determined whether it is in front of or behind the real objects in the scene. However, not everything was perfect. In many cases, it had trouble with scenes with variable lighting conditions and lots of occluders. You might ask, is that really a problem? Well, imagine the use case of a tourist attraction that a lot of people take photos of, where we then have a collection of photos taken during different times of the day and, of course, with a lot of people around. But hey, remember that this means an application where we have exactly these conditions: a wide variety of illumination changes and occluders. This is exactly what NeRF was not too good at. Let's see how it did on such a case. Yes, we see both abrupt changes in the illumination and the remnants of the folks occluding the Brandenburg Gate as well. And this is where this new technique from scientists at Google Research, by the name NeRF-W, shines. It takes such a photo collection and tries to reconstruct the whole scene from it, which we can again render from new viewpoints. So how well did the new method do in this case? Let's see. Wow, just look at how consistent those results are. So much improvement in just six months of time. This is unbelievable. This is how it did in a similar case with the Trevi Fountain. Absolutely beautiful. And what is even more beautiful is that since it has variation in the viewpoint information, we can change these viewpoints around, as the algorithm learned to reconstruct the scene itself. This is something that the original NeRF technique could also do; however, what it couldn't do is the same with illumination. Now we can also change the lighting conditions together with the viewpoint. This truly showcases a deep understanding of illumination and geometry. That is not trivial at all. For instance, when loading this scene into this neural re-rendering technique from last year, it couldn't tell whether we see just color variations on the same geometry or if the geometry itself is changing. And look, this new technique does much better on cases like this. So clean. Now that we have seen the images, let's see what the numbers say for these scenes. NRW is the neural re-rendering technique we just saw, and the other one is the NeRF paper from this year. The abbreviations show different ways of computing the output images, and the up and down arrows show whether they are subject to maximization or minimization.
They are both relatively close, but when we look at the new method, we see one of the rare cases where it wins decisively, regardless of what we are measuring. Incredible. This paper is truly a great leap in just a few months, but of course, not everything is perfect here. The technique may fail to reconstruct regions that are only visible in just a few photos in the input dataset. The training still takes from hours to days. I take this as an interesting detail more than a limitation, since the training only has to be done once and then using the technique can take place very quickly. But with that, there you go: a neural algorithm that understands lighting and geometry, can disentangle the two, and reconstruct real-world scenes from just a few photos. It truly feels like we are living in a science fiction world. What a time to be alive. What you see here is an instrumentation for a previous paper that we covered in this series, which was made by Weights & Biases. I think organizing these experiments really showcases the usability of their system. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you have an open source, academic, or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or click the link in the video description to start tracking your experiments in five minutes. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
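To make the "5D input" idea from the transcription above concrete, here is a hedged, minimal skeleton of a radiance-field network: it maps a 3D position and a 2D viewing direction to a color and a density, and adds a learned per-photo appearance embedding in the spirit of NeRF-W to absorb varying illumination. It deliberately omits positional encoding, the two-branch architecture and the volume-rendering step of the real papers; the layer sizes, embedding width and batch shapes are assumptions.

```python
# Tiny radiance-field skeleton: (x, y, z, theta, phi) + per-image code -> (RGB, density).
import torch
import torch.nn as nn

class TinyRadianceField(nn.Module):
    def __init__(self, num_images, appearance_dim=8, hidden=128):
        super().__init__()
        # One learned latent per training photo (NeRF-W style appearance code).
        self.appearance = nn.Embedding(num_images, appearance_dim)
        self.mlp = nn.Sequential(
            nn.Linear(3 + 2 + appearance_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 3 color channels + 1 volume density
        )

    def forward(self, xyz, view_dir, image_id):
        # xyz: (N, 3), view_dir: (N, 2) as (theta, phi), image_id: (N,) long
        code = self.appearance(image_id)
        out = self.mlp(torch.cat([xyz, view_dir, code], dim=-1))
        rgb = torch.sigmoid(out[:, :3])   # colors in [0, 1]
        density = torch.relu(out[:, 3])   # non-negative density
        return rgb, density

# Query a batch of sample points along some camera rays.
model = TinyRadianceField(num_images=100)
rgb, density = model(torch.rand(4096, 3), torch.rand(4096, 2),
                     torch.randint(0, 100, (4096,)))
print(rgb.shape, density.shape)  # torch.Size([4096, 3]) torch.Size([4096])
```

In the full method, many such samples along each camera ray are composited with volume rendering to produce one pixel, which is why the network predicts a density alongside the color.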
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karajol Neyfahir."}, {"start": 4.4, "end": 9.84, "text": " Approximately five months ago, we talked about a technique called Neural Radiance Fields"}, {"start": 9.84, "end": 15.84, "text": " or Nerf in short that worked on a 5D Neural Radiance Field representation."}, {"start": 15.84, "end": 18.080000000000002, "text": " So, what does this mean exactly?"}, {"start": 18.080000000000002, "end": 23.36, "text": " What this means is that we have three dimensions for location and two for view direction"}, {"start": 23.36, "end": 29.36, "text": " or in short, the input is where we are in space, give it to a neural network to learn it"}, {"start": 29.36, "end": 34.72, "text": " and synthesize new previously unseen views of not just the materials in the scene,"}, {"start": 34.72, "end": 37.28, "text": " but the entire scene itself."}, {"start": 37.28, "end": 43.84, "text": " In short, it can learn and reproduce entire real-world scenes from only a few views by using neural"}, {"start": 43.84, "end": 50.0, "text": " networks. And the results were just out of this world. Look, it could deal with many kinds of"}, {"start": 50.0, "end": 55.92, "text": " matte and glossy materials and even refractions worked quite well. It also understood"}, {"start": 55.92, "end": 62.160000000000004, "text": " depth so accurately that we could use it for augmented reality applications where we put a new"}, {"start": 62.160000000000004, "end": 68.24000000000001, "text": " virtual object in the scene and it correctly determined whether it is in front of or behind"}, {"start": 68.24000000000001, "end": 72.72, "text": " the real objects in the scene. However, not everything was perfect."}, {"start": 72.72, "end": 78.72, "text": " In many cases, it had trouble with scenes with variable lighting conditions and lots of occluders."}, {"start": 79.44, "end": 81.92, "text": " You might ask, is that really a problem?"}, {"start": 81.92, "end": 88.8, "text": " Well, imagine a use case of a tourist attraction that a lot of people take photos of and we then"}, {"start": 88.8, "end": 94.8, "text": " have a collection of photos taken during a different time of the day and of course with a lot of"}, {"start": 94.8, "end": 102.32000000000001, "text": " people around. But hey, remember that this means an application where we have exactly these conditions."}, {"start": 102.32000000000001, "end": 109.76, "text": " A wide variety of illumination changes and occluders. This is exactly what Nerf was not too good at."}, {"start": 109.76, "end": 117.60000000000001, "text": " Let's see how it did on such a case. Yes, we see both a drop changes in the illumination and the"}, {"start": 117.60000000000001, "end": 123.12, "text": " remnants of the fogs occluding the Brandenburg Gate as well. And this is where this new technique"}, {"start": 123.12, "end": 130.48000000000002, "text": " from scientists at Google Research by the name NerfW shines. It takes such a photo collection and tries"}, {"start": 130.48000000000002, "end": 137.44, "text": " to reconstruct the whole scene from it which we can again render from new viewpoints. So how well"}, {"start": 137.44, "end": 145.76, "text": " did the new method do in this case? Let's see. Wow, just look at how consistent those results are."}, {"start": 146.32, "end": 153.68, "text": " So much improvement in just six months of time. This is unbelievable. 
This is how I did in a similar"}, {"start": 153.68, "end": 160.24, "text": " case with the Trevi fountain. Absolutely beautiful. And what is even more beautiful is that since it has"}, {"start": 160.24, "end": 165.84, "text": " variation in the viewpoint information, we can change these viewpoints around as the algorithm"}, {"start": 165.84, "end": 171.04, "text": " learned to reconstruct the scene itself. This is something that the original Nerf technique could"}, {"start": 171.04, "end": 178.08, "text": " also do, however, what it couldn't do is the same with illumination. Now we can also change the"}, {"start": 178.08, "end": 184.72, "text": " lighting conditions together with the viewpoint. This truly showcases a deep understanding of illumination"}, {"start": 184.72, "end": 191.76, "text": " and geometry. That is not trivial at all. For instance, while loading this scene into this mural"}, {"start": 191.76, "end": 197.35999999999999, "text": " re-rendering technique from last year, it couldn't tell whether we see just color variations on the same"}, {"start": 197.35999999999999, "end": 205.2, "text": " geometry or if the geometry itself is changing. And look, this new technique does much better on"}, {"start": 205.2, "end": 213.2, "text": " cases like this. So clean. Now that we have seen the images, let's see what the numbers say for"}, {"start": 213.2, "end": 219.35999999999999, "text": " these scenes. The NRW is the neural re-rendering technique we just saw and the other one is the"}, {"start": 219.36, "end": 225.36, "text": " Nerf paper from this year. The abbreviations show different ways of computing the output images,"}, {"start": 225.36, "end": 231.68, "text": " the up and down arrows show whether they are subject to maximization or minimization. They are both"}, {"start": 231.68, "end": 237.84, "text": " relatively close, but when we look at the new method, we see one of the rare cases where it wins"}, {"start": 237.84, "end": 244.64000000000001, "text": " decisively, regardless of what we are measuring. Incredible. This paper is truly a great leap in"}, {"start": 244.64, "end": 250.48, "text": " just a few months, but of course not everything is perfect here. The technique may fail to reconstruct"}, {"start": 250.48, "end": 256.64, "text": " regions that are only visible on just a few photos in the input dataset. The training still takes"}, {"start": 256.64, "end": 262.4, "text": " from hours to days. I take this as an interesting detail more than a limitation since the training"}, {"start": 262.4, "end": 269.2, "text": " only has to be done once and then using the technique can take place very quickly. But with that,"}, {"start": 269.2, "end": 275.59999999999997, "text": " there you go. A neural algorithm that understands lighting, geometry, can disentangle the tool,"}, {"start": 275.59999999999997, "end": 281.92, "text": " and reconstruct real-world scenes from just a few photos. It truly feels like we are living in a"}, {"start": 281.92, "end": 287.59999999999997, "text": " science fiction world. What a time to be alive. What you see here is an instrumentation for a"}, {"start": 287.59999999999997, "end": 293.44, "text": " previous paper that we covered in this series which was made by weights and biases. 
I think"}, {"start": 293.44, "end": 298.08, "text": " organizing these experiments really showcases the usability of their system."}, {"start": 298.08, "end": 302.71999999999997, "text": " Wates and biases provides tools to track your experiments in your deep learning projects."}, {"start": 302.71999999999997, "end": 308.47999999999996, "text": " Their system is designed to save you a ton of time and money and it is actively used in projects"}, {"start": 308.47999999999996, "end": 315.68, "text": " at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you"}, {"start": 315.68, "end": 322.32, "text": " have an open source, academic, or personal project, you can use their tools for free. It really is"}, {"start": 322.32, "end": 329.12, "text": " as good as it gets. Make sure to visit them through wnbe.com slash papers or click the link in the"}, {"start": 329.12, "end": 334.88, "text": " video description to start tracking your experiments in five minutes. Our thanks to weights and biases"}, {"start": 334.88, "end": 340.15999999999997, "text": " for their long-standing support and for helping us make better videos for you. Thanks for watching"}, {"start": 340.16, "end": 355.04, "text": " and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=YCur6ir6wmw
AI Makes Video Game After Watching Tennis Matches!
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Vid2Player: Controllable Video Sprites that Behave and Appear like Professional Tennis Players" is available here: https://cs.stanford.edu/~haotianz/research/vid2player/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Approximately a year ago, we talked about an absolutely amazing paper by the name Vid2Game, in which we could grab a controller and become video game characters. It was among the first introductory papers to tackle this problem, and in this series, we always say that two more papers down the line and it will be improved significantly. So let's see what's in store, and this time, just one more paper down the line. This new work offers an impressive value proposition, which is to transform a real tennis match into a realistic-looking video game that is controllable. This includes synthesizing not only the movements, but also what effect the movements have on the ball as well. So how do we control this? And now hold on to your papers, because we can specify where the next shot would land with just one click. For instance, we can place this red dot here. And now just think about the fact that this doesn't just change where the ball should go, but the trajectory of the ball has to be computed using a physical model, along with the kind of shot the tennis player has to perform for the resulting ball trajectory to look believable. This physical model even contains the ball's spin velocity and the Magnus effect created by the spin. The entire chain of animations has to be correct, and that's exactly what happens here. With blue, we can also specify the position the player has to wait in to hit the ball next. And these virtual characters don't just look like their real counterparts, they also play like them. You see, the authors analyze the playstyle of these athletes and build a heat map that contains information about their usual shot placements for the forehand and backhand shots separately, the average velocities of these shots, and even their favored recovery positions. If you have a closer look at the paper, you will see that they not only include this kind of statistical knowledge in their system, but they really went the extra mile and included common tennis strategies as well. So how does it work? Let's look under the hood. First, it looks at broadcast footage from which annotated clips are extracted that contain the movement of these players. If you look carefully, you see this red line on the spine of the player and some more. These are annotations that tell the AI about the pose of the players. It builds a database from these clips and chooses the appropriate piece of footage for the action that is about to happen, which sounds great in theory, but in a moment you will see that this is not nearly enough to produce a believable animation. For instance, we also need a rendering step which has to adjust this footage to the appropriate perspective, as you see here. But we have to do way more to make this work. Look, without additional considerations, we get something like this. Not good. So what happened here? Well, given the fact that the source datasets contain matches that are several hours long, they therefore contain many different lighting conditions. With this, visual glitches are practically guaranteed to happen. To address this, the paper describes a normalization step that can even these changes out. How well does it do its job? Let's have a look. This is the unnormalized case. This short sequence appears to contain at least four of these glitches, all of which are quite apparent. And now, let's see the new system after the normalization step. Yep, that's what I'm talking about.
But these are not the only considerations the authors had to take into account to produce these amazing results. You see, oftentimes, quite a bit of information is missing from these frames. Our seasoned Fellow Scholars know not to despair, because we can reach out to image inpainting methods to address this. These can fill in missing details in images with sensible information. You can see NVIDIA's work from two years ago that could do this reliably for a great variety of images. This new work uses a learning-based technique called image-to-image translation to fill in these details. Of course, the advantages of this new system are visible right away, and so are its limitations. For instance, temporal coherence could be improved, meaning that the tennis rackets can appear or disappear from one frame to another. The sprites are not as detailed as they could be, but none of this really matters. What matters is that what's been previously impossible is now possible, and two more papers down the line, it is very likely that all of these issues will be ironed out. What a time to be alive! This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000s. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
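The "physical model" with spin and the Magnus effect mentioned in the transcription above can be illustrated with a very small numerical simulation. The sketch below is not the paper's simulator: the drag and lift coefficients, the simple Euler integration and the example shot are rough assumptions, only meant to show how spin bends a ball's trajectory.

```python
# Toy tennis-ball trajectory with gravity, air drag and a Magnus force from spin.
import numpy as np

def simulate_shot(pos, vel, spin, dt=0.002, steps=1500):
    """Integrate a ball trajectory; returns positions until it reaches the ground."""
    m = 0.057            # ball mass [kg]
    r = 0.033            # ball radius [m]
    rho = 1.21           # air density [kg/m^3]
    area = np.pi * r**2
    cd, cl = 0.55, 0.25  # rough drag / lift coefficients (assumed)
    g = np.array([0.0, 0.0, -9.81])

    path = [pos.copy()]
    for _ in range(steps):
        speed = np.linalg.norm(vel)
        drag = -0.5 * rho * cd * area * speed * vel
        # Magnus force: perpendicular to both the spin axis and the velocity.
        magnus = 0.5 * rho * cl * area * r * np.cross(spin, vel)
        acc = g + (drag + magnus) / m
        vel = vel + acc * dt
        pos = pos + vel * dt
        path.append(pos.copy())
        if pos[2] <= 0.0:            # ball reached the court surface
            break
    return np.array(path)

# Example: a flat-ish shot from the baseline with heavy topspin.
trajectory = simulate_shot(
    pos=np.array([0.0, 0.0, 1.0]),      # x, y, height [m]
    vel=np.array([30.0, 0.0, 2.0]),     # initial velocity [m/s]
    spin=np.array([0.0, 300.0, 0.0]),   # spin vector [rad/s]; topspin pulls the ball down
)
print("landing point:", trajectory[-1][:2])
```

In a system like the one described above, the landing point is the user-specified target, so the solver would search for an initial velocity and spin whose simulated trajectory lands there, and then pick an animation clip consistent with that shot.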
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.64, "end": 10.84, "text": " Approximately a year ago, we talked about an absolutely amazing paper by the name VittuGame,"}, {"start": 10.84, "end": 15.120000000000001, "text": " in which we could grab a controller and become video game characters."}, {"start": 15.120000000000001, "end": 20.0, "text": " It was among the first introductory papers to tackle this problem, and in this series,"}, {"start": 20.0, "end": 25.32, "text": " we always say that two more papers down the line and it will be improved significantly."}, {"start": 25.32, "end": 30.84, "text": " So let's see what's in store and this time just one more paper down the line."}, {"start": 30.84, "end": 36.16, "text": " This new work offers an impressive value proposition, which is to transform a real tennis match"}, {"start": 36.16, "end": 40.44, "text": " into a realistic looking video game that is controllable."}, {"start": 40.44, "end": 45.6, "text": " This includes synthesizing not only movements, but also what effect the movements have on"}, {"start": 45.6, "end": 47.2, "text": " the ball as well."}, {"start": 47.2, "end": 49.36, "text": " So how do we control this?"}, {"start": 49.36, "end": 54.44, "text": " And now hold on to your papers because we can specify where the next shot would land"}, {"start": 54.44, "end": 56.519999999999996, "text": " with just one click."}, {"start": 56.519999999999996, "end": 59.28, "text": " For instance, we can place this red dot here."}, {"start": 59.28, "end": 64.72, "text": " And now just think about the fact that this doesn't just change where the ball should go,"}, {"start": 64.72, "end": 70.47999999999999, "text": " but the trajectory of the ball should be computed using a physical model and the kind of shot"}, {"start": 70.47999999999999, "end": 75.44, "text": " the tennis player has to perform for the resulting ball trajectory to look believable."}, {"start": 75.44, "end": 80.75999999999999, "text": " This physical model even contains the ball's spin velocity and the magnus effect created"}, {"start": 80.75999999999999, "end": 82.0, "text": " by the spin."}, {"start": 82.0, "end": 87.6, "text": " The entire chain of animations has to be correct and that's exactly what happens here."}, {"start": 87.6, "end": 92.4, "text": " With blue, we can also specify the position the player has to await in to hit the ball"}, {"start": 92.4, "end": 93.4, "text": " next."}, {"start": 93.4, "end": 98.4, "text": " And these virtual characters don't just look like their real counterparts, they also play"}, {"start": 98.4, "end": 99.68, "text": " like them."}, {"start": 99.68, "end": 104.92, "text": " You see the authors analyze the playstyle of these athletes and build a heat map that"}, {"start": 104.92, "end": 110.32, "text": " contains information about their usual shot placements for the forehand and backhand shots"}, {"start": 110.32, "end": 117.11999999999999, "text": " separately, the average velocities of these shots and even their favored recovery positions."}, {"start": 117.11999999999999, "end": 121.24, "text": " If you have a closer look at the paper, you will see that they not only include this kind"}, {"start": 121.24, "end": 126.0, "text": " of statistical knowledge into their system, but they really went the extra mile and included"}, {"start": 126.0, "end": 128.4, "text": " common tennis strategies as well."}, {"start": 128.4, "end": 130.48, "text": " So 
how does it work?"}, {"start": 130.48, "end": 132.0, "text": " Let's look under the hood."}, {"start": 132.0, "end": 137.35999999999999, "text": " First, it looks at broadcast footage from which annotated clips are extracted that contain"}, {"start": 137.35999999999999, "end": 139.51999999999998, "text": " the movement of these players."}, {"start": 139.52, "end": 144.48000000000002, "text": " If you look carefully, you see this red line on the spine of the player and some more."}, {"start": 144.48000000000002, "end": 148.8, "text": " These are annotations that tell the AI about the pose of the players."}, {"start": 148.8, "end": 153.16000000000003, "text": " It builds a database from these clips and chooses the appropriate piece of footage for the"}, {"start": 153.16000000000003, "end": 158.56, "text": " action that is about to happen, which sounds great in theory, but in a moment you will"}, {"start": 158.56, "end": 163.68, "text": " see that this is not nearly enough to produce a believable animation."}, {"start": 163.68, "end": 168.48000000000002, "text": " For instance, we also need a rendering step which has to adjust this footage to the appropriate"}, {"start": 168.48, "end": 170.48, "text": " perspective as you see here."}, {"start": 170.48, "end": 173.84, "text": " But we have to do way more to make this work."}, {"start": 173.84, "end": 179.12, "text": " Look, without additional considerations, we get something like this."}, {"start": 179.12, "end": 180.12, "text": " Not good."}, {"start": 180.12, "end": 182.32, "text": " So what happened here?"}, {"start": 182.32, "end": 187.44, "text": " Well, given the fact that the source data sets contain matches that are several hours"}, {"start": 187.44, "end": 191.51999999999998, "text": " long, they therefore contain many different lighting conditions."}, {"start": 191.51999999999998, "end": 195.67999999999998, "text": " With this, visual glitches are practically guaranteed to happen."}, {"start": 195.68, "end": 201.44, "text": " To address this, the paper describes a normalization step that can even these changes out."}, {"start": 201.44, "end": 202.64000000000001, "text": " How well does this do?"}, {"start": 202.64000000000001, "end": 203.64000000000001, "text": " It's job."}, {"start": 203.64000000000001, "end": 204.84, "text": " Let's have a look."}, {"start": 204.84, "end": 206.88, "text": " This is the unnormalized case."}, {"start": 206.88, "end": 212.12, "text": " This short sequence appears to contain at least four of these glitches, all of which are"}, {"start": 212.12, "end": 213.72, "text": " quite apparent."}, {"start": 213.72, "end": 218.88, "text": " And now, let's see the new system after the normalization step."}, {"start": 218.88, "end": 225.08, "text": " Yep, that's what I'm talking about."}, {"start": 225.08, "end": 229.76000000000002, "text": " But these are not the only considerations the authors had to take to produce these amazing"}, {"start": 229.76000000000002, "end": 230.76000000000002, "text": " results."}, {"start": 230.76000000000002, "end": 235.92000000000002, "text": " You see, oftentimes, quite a bit of information is missing from these frames."}, {"start": 235.92000000000002, "end": 241.44, "text": " Our season fellow scholars know not to despair because we can reach out to image-in-painting"}, {"start": 241.44, "end": 243.36, "text": " methods to address this."}, {"start": 243.36, "end": 247.72000000000003, "text": " These can fill in missing details in images with sensible information."}, {"start": 
247.72000000000003, "end": 252.52, "text": " You can see NVIDIA's work from two years ago that could do this reliably for a great"}, {"start": 252.52, "end": 254.56, "text": " variety of images."}, {"start": 254.56, "end": 259.36, "text": " These new work uses a learning-based technique called image-to-image translation to fill"}, {"start": 259.36, "end": 261.08, "text": " in these details."}, {"start": 261.08, "end": 268.0, "text": " Of course, the advantages of this new system are visible right away and so are its limitations."}, {"start": 268.0, "end": 273.44, "text": " For instance, temporal coherence could be improved, meaning that the tennis records can appear"}, {"start": 273.44, "end": 276.56, "text": " or disappear from one frame to another."}, {"start": 276.56, "end": 281.52, "text": " The sprites are not as detailed as they could be, but none of this really matters."}, {"start": 281.52, "end": 287.12, "text": " What matters is that now, what's been previously impossible is now possible and two more papers"}, {"start": 287.12, "end": 291.91999999999996, "text": " down the line, it is very likely that all of these issues will be ironed out."}, {"start": 291.91999999999996, "end": 293.56, "text": " What a time to be alive!"}, {"start": 293.56, "end": 297.03999999999996, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 297.03999999999996, "end": 303.0, "text": " If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 303.0, "end": 310.76, "text": " They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto"}, {"start": 310.76, "end": 317.36, "text": " your papers because Lambda GPU Cloud can cost less than half of AWS and Asia."}, {"start": 317.36, "end": 322.64, "text": " Plus, they are the only Cloud service with 48GB RTX 8000."}, {"start": 322.64, "end": 329.28, "text": " Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances,"}, {"start": 329.28, "end": 331.08, "text": " workstations or servers."}, {"start": 331.08, "end": 337.03999999999996, "text": " Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances"}, {"start": 337.03999999999996, "end": 338.03999999999996, "text": " today."}, {"start": 338.04, "end": 342.76000000000005, "text": " Thanks to Lambda for their long-standing support and for helping us make better videos for"}, {"start": 342.76000000000005, "end": 343.76000000000005, "text": " you."}, {"start": 343.76, "end": 373.71999999999997, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
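The tennis transcript above mentions that the system computes the ball's trajectory with a physical model that includes spin and the Magnus effect. As a rough, hypothetical illustration of what such a model involves (and not the paper's actual implementation), here is a minimal Python sketch that integrates a ball's flight under gravity, quadratic drag, and a spin-dependent Magnus force; the coefficients, the explicit Euler integrator, and the function names are all assumptions chosen for simplicity.

import numpy as np

# Illustrative constants (assumed, not taken from the paper)
RHO = 1.225          # air density, kg/m^3
R = 0.033            # tennis ball radius, m
M = 0.057            # tennis ball mass, kg
A = np.pi * R**2     # cross-sectional area, m^2
CD = 0.55            # drag coefficient (rough estimate)
CL = 0.25            # lift (Magnus) coefficient (rough estimate)
G = np.array([0.0, 0.0, -9.81])

def accel(v, spin):
    """Acceleration from gravity, quadratic drag, and a Magnus term for velocity v and spin vector (rad/s)."""
    speed = np.linalg.norm(v)
    if speed < 1e-9:
        return G
    drag = -0.5 * RHO * CD * A * speed * v / M
    # The Magnus force acts along spin x velocity; its magnitude here uses an assumed lift coefficient.
    magnus = 0.5 * RHO * CL * A * R * np.cross(spin, v) / M
    return G + drag + magnus

def simulate_shot(p0, v0, spin, dt=1e-3):
    """Integrate the flight with explicit Euler until the ball reaches the court plane (z = 0)."""
    p, v = np.array(p0, float), np.array(v0, float)
    while p[2] > 0.0:
        v = v + dt * accel(v, spin)
        p = p + dt * v
    return p  # approximate landing position

# Example: a shot launched from near the baseline with spin about the y axis.
landing = simulate_shot(p0=[0.0, 0.0, 1.0], v0=[25.0, 2.0, 3.0], spin=[0.0, -300.0, 0.0])
print("estimated landing point:", landing)

Changing the spin vector or the launch velocity moves the landing point, which hints at the kind of inverse problem a controllable system has to solve when the user clicks a target spot on the court.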
Two Minute Papers
https://www.youtube.com/watch?v=GniyQkgGlUA
Can An AI Create Original Art? 👨‍🎨
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their report on this paper is available here: https://app.wandb.ai/authors/rewrite-gan/reports/An-Overview-Rewriting-a-Deep-Generative-Model--VmlldzoyMzgyNTU 📝 The paper "Rewriting a Deep Generative Model" is available here: https://rewriting.csail.mit.edu/ Read the instructions carefully and try it here: https://colab.research.google.com/github/davidbau/rewriting/blob/master/notebooks/rewriting-interface.ipynb 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Approximately seven months ago, we discussed an AI-based technique called StyleGAN2, which could synthesize images of human faces for us. As a result, none of the faces that you see here are real, all of them were generated by this technique. The quality of the images and the amount of detail therein is truly stunning. We could also exert artistic control over these outputs by combining aspects of multiple faces together, and as the quality of these images improves over time, we think more and more about new questions to ask about them. And one of those questions is, for instance, how original are the outputs of these neural networks? Can these really make something truly unique? And believe it or not, this paper gives us a fairly good answer to that. One of the key ideas in this work is that in order to change the outputs, we have to change the model itself. Now that sounds a little nebulous, so let's have a look at an example. First, we choose a rule that we wish to change. In our case, this will be the towers. We can ask the algorithm to show us matches to this concept. And indeed, it highlights the towers on the images we haven't marked up yet, so it indeed understands what we meant. Then we highlight the tree as a goal, place it accordingly onto the water, and a few seconds later, there we go. The model has been reprogrammed such that instead of towers, it would make trees. Something original has emerged here, and look, not only on one image, but on multiple images at the same time. Now have a look at these human faces. By the way, none of them are real, and all were synthesized by StyleGAN2, the method you saw at the start of the video. Some of them do not appear to be too happy about the research progress in machine learning, but I am sure that this paper can put a smile on their faces. Let's select the ones that aren't too happy, then copy a big smile and paste it onto their faces. And see if it works. It does. Wow! Let's flick between the before and after images and see how well the changes are adapted to each of the target faces. Truly excellent work. And now on to eyebrows. Hold onto your papers while we choose a few of them. And now I hope you agree that this mustache would make a glorious replacement for them. And... There we go. Perfect. And note that with this, we are violating Betteridge's law of headlines again in this series because the answer to our central question is a resounding yes. These neural networks can indeed create truly original works, and what's more, even entire data sets that haven't existed before. Now at the start of the video, we noted that instead of editing images, it edits the neural network model instead. If you look here, we have a set of input images created by a generator network. Then as we highlight concepts, for instance, the watermark text here, we can look for the weights that contain this information and rewrite the network to accommodate this user request, in this case to remove these patterns. Now that they are gone, by selecting humans, we can again rewrite the network weights to add more of them. And finally, the signature tree trick from earlier can take place. The key here is that if we change one image, then we have a new and original image, but if we change the generator model itself, we can make thousands of new images in one go. Or even a full data set. 
And perhaps the trickiest part of this work is minimizing the effect on other weights while we reprogram the ones we wish to change. Of course, there will always be some collateral damage, but the results, in most cases, still seem to remain intact. Make sure to have a look at the paper to see how it's done exactly. Also, good news, the authors also provided an online notebook where you can try this technique yourself. If you do, make sure to read the instructions carefully and regardless of whether you get successes or failure cases, make sure to post them in the comments section here. In research, both are useful information. So, after the training step has taken place, neural networks can be rewired to make sure they create truly original works and all this on not one image, but on a mass scale. What a time to be alive! What you see here is an instrumentation of this exact paper we have talked about which was made by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you have an open source, academic or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or click the link in the video description to start tracking your experiments in 5 minutes. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
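To make the weight-rewriting idea from the transcription above a bit more tangible, here is a small, hypothetical numpy sketch of a minimum-norm, rank-one edit to one layer's weight matrix: a chosen key direction is forced to produce a new value, while inputs orthogonal to it are left untouched. This is only a simplified stand-in for the spirit of the method (change one rule, limit collateral damage), not the algorithm from the paper, and all names and shapes are assumptions.

import numpy as np

def rewrite_layer(W, k, v_target):
    """Return W' closest to W (in Frobenius norm) such that W' @ k_hat == v_target.

    W        : (out_dim, in_dim) weight matrix of one generator layer
    k        : (in_dim,) feature direction that triggers the concept we edit
    v_target : (out_dim,) response we want that direction to produce instead
    """
    k_hat = k / np.linalg.norm(k)             # work with a unit-length key
    residual = v_target - W @ k_hat           # how far the current response is from the goal
    # Rank-one correction along k_hat only: inputs orthogonal to k_hat are untouched,
    # which is the "minimize collateral damage" idea in spirit.
    return W + np.outer(residual, k_hat)

# Tiny demo with random numbers standing in for real generator weights and features.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))
k = rng.normal(size=16)
v_new = rng.normal(size=8)

W_edited = rewrite_layer(W, k, v_new)
print(np.allclose(W_edited @ (k / np.linalg.norm(k)), v_new))   # True: the rule now maps k to v_new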
[{"start": 0.0, "end": 5.32, "text": " Here's fellow scholars, this is two-minute papers with Dr. Karri Zsolnai-Fehir approximately"}, {"start": 5.32, "end": 11.28, "text": " seven months ago, we discussed an AI-based technique called StyleGam2, which could synthesize"}, {"start": 11.28, "end": 13.6, "text": " images of human faces for us."}, {"start": 13.6, "end": 18.68, "text": " As a result, none of the faces that you see here are real, all of them were generated"}, {"start": 18.68, "end": 20.12, "text": " by this technique."}, {"start": 20.12, "end": 25.12, "text": " The quality of the images and the amount of detail they're in is truly stunning."}, {"start": 25.12, "end": 30.240000000000002, "text": " We could also exert artistic control over these outputs by combining aspects of multiple"}, {"start": 30.240000000000002, "end": 35.72, "text": " faces together, and as the quality of these images improve over time, we think more and"}, {"start": 35.72, "end": 38.92, "text": " more about new questions to ask about them."}, {"start": 38.92, "end": 43.760000000000005, "text": " And one of those questions is, for instance, how original are the outputs of these neural"}, {"start": 43.760000000000005, "end": 44.760000000000005, "text": " networks?"}, {"start": 44.760000000000005, "end": 47.84, "text": " Can these really make something truly unique?"}, {"start": 47.84, "end": 52.08, "text": " And believe it or not, this paper gives us a fairly good answer to that."}, {"start": 52.08, "end": 56.56, "text": " One of the key ideas in this work is that in order to change the outputs, we have to"}, {"start": 56.56, "end": 58.6, "text": " change the model itself."}, {"start": 58.6, "end": 62.16, "text": " Now that sounds a little nebulous, so let's have a look at an example."}, {"start": 62.16, "end": 64.64, "text": " First, we choose a rule that we wish to change."}, {"start": 64.64, "end": 67.12, "text": " In our case, this will be the towers."}, {"start": 67.12, "end": 72.0, "text": " We can ask the algorithm to show us matches to this concept."}, {"start": 72.0, "end": 76.75999999999999, "text": " And indeed, it highlights the towers on the images we haven't marked up yet, so it"}, {"start": 76.75999999999999, "end": 79.56, "text": " indeed understands what we meant."}, {"start": 79.56, "end": 91.52000000000001, "text": " Then we highlight the tree as a goal, place it accordingly onto the water, and a few seconds"}, {"start": 91.52000000000001, "end": 95.16, "text": " later, there we go."}, {"start": 95.16, "end": 101.04, "text": " The model has been reprogrammed such that instead of powers, it would make trees."}, {"start": 101.04, "end": 106.52000000000001, "text": " Something original has emerged here, and look, not only on one image, but on multiple"}, {"start": 106.52, "end": 109.84, "text": " images at the same time."}, {"start": 109.84, "end": 112.32, "text": " Now have a look at these human faces."}, {"start": 112.32, "end": 117.08, "text": " By the way, none of them are real, and we're all synthesized by Stuyagen 2, the method"}, {"start": 117.08, "end": 119.39999999999999, "text": " you saw at the start of the video."}, {"start": 119.39999999999999, "end": 124.08, "text": " Some of them do not appear to be too happy about the research progress in machine learning,"}, {"start": 124.08, "end": 127.8, "text": " but I am sure that this paper can put a smile on their faces."}, {"start": 127.8, "end": 133.51999999999998, "text": " Let's select the ones that aren't too happy, then copy a big 
smile and paste it onto"}, {"start": 133.51999999999998, "end": 134.84, "text": " their faces."}, {"start": 134.84, "end": 138.56, "text": " And see if it works."}, {"start": 138.56, "end": 139.56, "text": " It does."}, {"start": 139.56, "end": 140.56, "text": " Wow!"}, {"start": 140.56, "end": 146.08, "text": " Let's flick between the before and after images and see how well the changes are adapted"}, {"start": 146.08, "end": 148.44, "text": " to each of the target faces."}, {"start": 148.44, "end": 152.8, "text": " Truly excellent work."}, {"start": 152.8, "end": 155.16, "text": " And now on to eyebrows."}, {"start": 155.16, "end": 160.16, "text": " Hold onto your papers while we choose a few of them."}, {"start": 160.16, "end": 167.12, "text": " And now I hope you agree that this mustache would make a glorious replacement for them."}, {"start": 167.12, "end": 168.12, "text": " And..."}, {"start": 168.12, "end": 170.4, "text": " There we go."}, {"start": 170.4, "end": 171.4, "text": " Perfect."}, {"start": 171.4, "end": 176.4, "text": " And note that with this, we are violating Betteridge's law of headlines again in this series"}, {"start": 176.4, "end": 180.4, "text": " because the answer to our central question is a resounding yes."}, {"start": 180.4, "end": 185.64, "text": " These neural networks can indeed create truly original works, and what's more, even"}, {"start": 185.64, "end": 189.16, "text": " entire data sets that haven't existed before."}, {"start": 189.16, "end": 194.16, "text": " Now at the start of the video, we noted that instead of editing images, it edits the"}, {"start": 194.16, "end": 196.52, "text": " neural networks model instead."}, {"start": 196.52, "end": 201.28, "text": " If you look here, we have a set of input images created by a generator network."}, {"start": 201.28, "end": 206.28, "text": " Then as we highlight concepts, for instance, the watermark text here, we can look for the"}, {"start": 206.28, "end": 211.16, "text": " weights that contain this information and rewrite the network to accommodate this user"}, {"start": 211.16, "end": 214.88, "text": " request in this case to remove these patterns."}, {"start": 214.88, "end": 219.84, "text": " Now that they are gone by selecting humans, we can again rewrite the network weights to"}, {"start": 219.84, "end": 223.2, "text": " add more of them."}, {"start": 223.2, "end": 228.12, "text": " And finally, the signature tree-trick from earlier can take place."}, {"start": 228.12, "end": 233.68, "text": " The key here is that if we change one image, then we have a new and original image, but"}, {"start": 233.68, "end": 240.04, "text": " if we change the generator model itself, we can make thousands of new images in one go."}, {"start": 240.04, "end": 242.32, "text": " Or even a full data set."}, {"start": 242.32, "end": 248.44, "text": " And perhaps the trickiest part of this work is minimizing the effect on other weights"}, {"start": 248.44, "end": 251.51999999999998, "text": " while we reprogrammed the ones we wish to change."}, {"start": 251.51999999999998, "end": 256.96, "text": " Of course, there will always be some collateral damage, but the results, in most cases, still"}, {"start": 256.96, "end": 259.0, "text": " seem to remain intact."}, {"start": 259.0, "end": 262.44, "text": " Make sure to have a look at the paper to see how it's done exactly."}, {"start": 262.44, "end": 267.44, "text": " Also, good news, the authors also provided an online notebook where you can try this"}, {"start": 267.44, "end": 
269.03999999999996, "text": " technique yourself."}, {"start": 269.04, "end": 273.36, "text": " If you do, make sure to read the instructions carefully and regardless of whether you get"}, {"start": 273.36, "end": 278.24, "text": " successes or failure cases, make sure to post them in the comments section here."}, {"start": 278.24, "end": 281.16, "text": " In research, both are useful information."}, {"start": 281.16, "end": 286.52000000000004, "text": " So, after the training step has taken place, neural networks can be rewired to make sure"}, {"start": 286.52000000000004, "end": 293.52000000000004, "text": " they create truly original works and all this on not one image, but on a mass scale."}, {"start": 293.52000000000004, "end": 295.24, "text": " What a time to be alive!"}, {"start": 295.24, "end": 299.96000000000004, "text": " What you see here is an instrumentation of this exact paper we have talked about which"}, {"start": 299.96000000000004, "end": 302.68, "text": " was made by weights and biases."}, {"start": 302.68, "end": 307.24, "text": " Wates and biases provides tools to track your experiments in your deep learning projects."}, {"start": 307.24, "end": 312.04, "text": " Their system is designed to save you a ton of time and money and it is actively used"}, {"start": 312.04, "end": 318.76, "text": " in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more."}, {"start": 318.76, "end": 323.72, "text": " And the best part is that if you have an open source, academic or personal project,"}, {"start": 323.72, "end": 325.84000000000003, "text": " you can use their tools for free."}, {"start": 325.84000000000003, "end": 328.36, "text": " It really is as good as it gets."}, {"start": 328.36, "end": 334.44000000000005, "text": " Make sure to visit them through wnba.com slash papers or click the link in the video description"}, {"start": 334.44000000000005, "end": 337.8, "text": " to start tracking your experiments in 5 minutes."}, {"start": 337.8, "end": 342.40000000000003, "text": " Our thanks to weights and biases for their long standing support and for helping us make"}, {"start": 342.40000000000003, "end": 343.68, "text": " better videos for you."}, {"start": 343.68, "end": 371.32, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=bfuBQp1JmX8
Can We Simulate a Rocket Launch? 🚀
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Fast and Scalable Turbulent Flow Simulation with Two-Way Coupling" is available here: http://faculty.sist.shanghaitech.edu.cn/faculty/liuxp/projects/lbm-solid/index.htm Vishnu Menon’s wind tunnel test video: https://www.youtube.com/watch?v=_q6ozALzkF4 ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. In this series, we often talk about smoke and fluid simulations and sometimes the examples showcase a beautiful smoke plume but not much else. However, in real production environments, these simulations often involve complex scenes with many objects that interact with each other, and therein lies the problem. Computing these interactions is called coupling and it is very difficult to get right, but it is necessary for many of the scenes you will see throughout this video. This new graphics paper builds on a technique called the lattice Boltzmann method and promises a better way to compute this two-way coupling. For instance, in this simulation, two-way coupling is required to compute how this fiery smoke trail propels the rocket upward. So, coupling means interaction between different kinds of objects, but what about the two-way part? What does that mean exactly? Well, first, let's have a look at one-way coupling. As the box moves here, it has an effect on the smoke plume around it. This example also showcases one-way coupling where the falling plate stirs up the smoke around it. The parts with the higher Reynolds numbers showcase more turbulent flows. Typically, that's the real good stuff if you ask me. And now onto two-way coupling. In this case, similarly to previous ones, the boxes are allowed to move the smoke, but the added two-way coupling part means that now the smoke is also allowed to blow away the boxes. What's more, the vortices here on the right were even able to suspend the red box in the air for a few seconds. An excellent demonstration of a beautiful phenomenon. Now, let's look at the previous example with the dropping plate and see what happens. Yes, indeed, as the plate drops, it moves the smoke, and as the smoke moves, it also blows away the boxes. Woohoo! Due to improvements in the coupling computation, it also simulates these kinds of vortices much more realistically than previous works. Just look at all this magnificent progress in just two years. So, what else can we do with all this? What are the typical scenarios that require accurate two-way coupling? Well, for instance, it can perform an incredible tornado simulation that you see here, and there is an alternative view where we only see the objects moving about. So all this looks good, but really, how do we know how accurate this technique is? And now comes my favorite part, and this is when we let reality be our judge and compare the simulation results with real-world experiments. Hold onto your papers while you observe the real experiment here on the left. And now, the algorithmic reproduction of the same scene here. How close are they? Goodness. Very, very close. I will stop the footage at different times so we can both evaluate it better. Love it. The technique can also undergo the wind tunnel test. Here is the real footage. And here is the simulation. And it is truly remarkable how closely this is able to match it, and even as someone who has been doing fluids for a while now, if someone cropped this part of the image and told me that it is real-world footage, I would have believed it in a second. Absolute insanity. So how long do we have to wait to compute a simulation like this? Well, great news. 
It uses your graphics card, which is typically the case for the more rudimentary fluid simulation algorithms out there, but the more elaborate ones typically don't support it, or at least not without a fair amount of additional effort. The quickest example was this one, as it was simulated in less than 6 seconds, which I find to be mind-blowing. The smoke simulation with box movements finished in a few seconds, I am truly out of words. The rocket launch scene took the longest with 16 hours, while the falling plate example with the strong draft that threw the boxes around took about 4.5 hours of computation time. The results depend greatly on Delta T, which is the size of the time steps, or in other words, in how small increments we can advance time when creating these simulations to make sure we don't miss any important interactions. You see here that in the rocket example, we have to simulate roughly 100,000 steps for every second of video footage. No wonder it takes so long. We have an easier time with this scene, where these time steps can be 50 times larger without losing any detail, and hence it goes much faster. The grid resolution also matters a great deal, which specifies how many spatial points the simulation has to take place in. The higher the resolution of the grid, the larger the region we can cover, and the more details we can simulate. As with most research works, this technique doesn't come without limitations, however. It is less accurate if we have simulations involving thin rods and shells, and typically uses 2 to 3 times more memory than a typical simulator program. If these are the only trade-offs to create all this marvelous footage, sign me up this very second. Overall, this paper is extraordinarily well written, and of course it has been accepted to the SIGGRAPH conference, one of the most prestigious scientific venues in computer graphics research. Huge congratulations to the authors, and if you wish to see something beautiful today, make sure to have a look at the paper itself in the video description. Truly stunning work. What a time to be alive! This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
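Since the transcription above ties the simulation cost to the time step Delta T and the grid resolution, here is a small, hypothetical back-of-the-envelope helper showing how a CFL-style stability limit turns fast flows and fine grids into enormous step counts. The Courant number, grid sizes, and velocities plugged in below are assumptions chosen to mirror the orders of magnitude mentioned in the video, not numbers taken from the paper.

def step_budget(grid_cells, dx, max_velocity, cfl=0.5, seconds_of_footage=1.0):
    """Rough cost estimate for an explicit grid-based solver.

    dx            : spacing between grid points, in meters
    max_velocity  : fastest flow speed expected in the scene, m/s
    cfl           : Courant number; the time step must keep information from
                    crossing more than ~cfl cells per step to stay stable.
    """
    dt = cfl * dx / max_velocity                 # admissible time step
    steps = seconds_of_footage / dt              # steps needed per second of footage
    cell_updates = steps * grid_cells            # one update per cell per step
    return dt, int(steps), cell_updates

# A very fast plume on a fine grid needs a tiny dt and on the order of 100,000 steps per second...
print(step_budget(grid_cells=256**3, dx=0.01, max_velocity=500.0))
# ...while a slow scene on the same grid tolerates time steps tens of times larger.
print(step_budget(grid_cells=256**3, dx=0.01, max_velocity=10.0))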
[{"start": 0.0, "end": 4.76, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karajol Naifahir."}, {"start": 4.76, "end": 10.200000000000001, "text": " In this series, we often talk about smoke and fluid simulations and sometimes the examples"}, {"start": 10.200000000000001, "end": 14.64, "text": " showcase a beautiful smoke plume but not much else."}, {"start": 14.64, "end": 19.72, "text": " However, in real production environments, these simulations often involve complex scenes"}, {"start": 19.72, "end": 25.28, "text": " with many objects that interact with each other and therein lies the problem."}, {"start": 25.28, "end": 29.92, "text": " Computing these interactions is called coupling and it is very difficult to get right, but"}, {"start": 29.92, "end": 33.92, "text": " it is necessary for many of the scenes you will see throughout this video."}, {"start": 33.92, "end": 39.08, "text": " This new graphics paper builds on a technique called the lattice Boltzmann method and promises"}, {"start": 39.08, "end": 42.2, "text": " a better way to compute this two-way coupling."}, {"start": 42.2, "end": 47.52, "text": " For instance, in this simulation, two-way coupling is required to compute how this fiery smoke"}, {"start": 47.52, "end": 50.28, "text": " trail propels the rocket upward."}, {"start": 50.28, "end": 56.120000000000005, "text": " So, coupling means interaction between different kinds of objects, but what about the two-way"}, {"start": 56.120000000000005, "end": 57.2, "text": " part?"}, {"start": 57.2, "end": 58.92, "text": " What does that mean exactly?"}, {"start": 58.92, "end": 62.56, "text": " Well, first, let's have a look at one-way coupling."}, {"start": 62.56, "end": 67.92, "text": " As the box moves here, it has an effect on the smoke plume around it."}, {"start": 67.92, "end": 73.44, "text": " This example also showcases one-way coupling where the falling plate stirs up the smoke around"}, {"start": 73.44, "end": 74.44, "text": " it."}, {"start": 74.44, "end": 78.64, "text": " The parts with the higher Reynolds numbers showcase more perbulent flows."}, {"start": 78.64, "end": 82.8, "text": " Typically, that's the real good stuff if you ask me."}, {"start": 82.8, "end": 85.04, "text": " And now onto two-way coupling."}, {"start": 85.04, "end": 90.84, "text": " In this case, similarly to previous ones, the boxes are allowed to move the smoke, but"}, {"start": 90.84, "end": 96.52000000000001, "text": " the added two-way coupling part means that now the smoke is also allowed to blow away"}, {"start": 96.52000000000001, "end": 98.0, "text": " the boxes."}, {"start": 98.0, "end": 102.68, "text": " What's more, the vertices here on the right were even able to suspend the red box in"}, {"start": 102.68, "end": 105.12, "text": " the air for a few seconds."}, {"start": 105.12, "end": 108.60000000000001, "text": " An excellent demonstration of a beautiful phenomenon."}, {"start": 108.6, "end": 115.36, "text": " Now, let's look at the previous example with the dropping plate and see what happens."}, {"start": 115.36, "end": 121.36, "text": " Yes, indeed, as the plate drops, it moves the smoke, and as the smoke moves, it also blows"}, {"start": 121.36, "end": 123.16, "text": " away the boxes."}, {"start": 123.16, "end": 125.84, "text": " Woohoo!"}, {"start": 125.84, "end": 130.84, "text": " Due to improvements in the coupling computation, it also simulates these kinds of vertices"}, {"start": 130.84, "end": 133.84, "text": " much more realistically than previous 
works."}, {"start": 133.84, "end": 137.84, "text": " Just look at all this magnificent progress in just two years."}, {"start": 137.84, "end": 140.84, "text": " So, what else can we do with all this?"}, {"start": 140.84, "end": 144.72, "text": " What are the typical scenarios that require accurate two-way coupling?"}, {"start": 144.72, "end": 152.08, "text": " Well, for instance, it can perform an incredible tornado simulation that you see here, and there"}, {"start": 152.08, "end": 156.36, "text": " is an alternative view where we only see the objects moving about."}, {"start": 156.36, "end": 162.48000000000002, "text": " So all this looks good, but really, how do we know how accurate this technique is?"}, {"start": 162.48, "end": 168.0, "text": " And now comes my favorite part, and this is when we let reality be our judge and compare"}, {"start": 168.0, "end": 172.12, "text": " the simulation results with real-world experiments."}, {"start": 172.12, "end": 176.88, "text": " Hold onto your papers while you observe the real experiment here on the left."}, {"start": 176.88, "end": 181.39999999999998, "text": " And now, the algorithmic reproduction of the same scene here."}, {"start": 181.39999999999998, "end": 183.04, "text": " How close are they?"}, {"start": 183.04, "end": 184.04, "text": " Goodness."}, {"start": 184.04, "end": 185.79999999999998, "text": " Very, very close."}, {"start": 185.79999999999998, "end": 190.48, "text": " I will stop the footage at different times so we can both evaluate it better."}, {"start": 190.48, "end": 192.23999999999998, "text": " Love it."}, {"start": 192.24, "end": 195.28, "text": " The technique can also undergo the wind tunnel test."}, {"start": 195.28, "end": 197.92000000000002, "text": " Here is the real footage."}, {"start": 197.92000000000002, "end": 201.56, "text": " And here is the simulation."}, {"start": 201.56, "end": 206.32000000000002, "text": " And it is truly remarkable how close this is able to match it, and I was wondering that"}, {"start": 206.32000000000002, "end": 211.36, "text": " even though someone who has been doing fluids for a while now is someone cropped this part"}, {"start": 211.36, "end": 217.48000000000002, "text": " of the image and told me that it is real-world footage, I would have believed it in a second."}, {"start": 217.48000000000002, "end": 219.08, "text": " Absolute insanity."}, {"start": 219.08, "end": 223.04000000000002, "text": " So how much do we have to wait to compute a simulation like this?"}, {"start": 223.04000000000002, "end": 224.64000000000001, "text": " Well, great news."}, {"start": 224.64000000000001, "end": 229.24, "text": " It uses your graphics card, which is typically the case for the more rudimentary fluid simulation"}, {"start": 229.24, "end": 234.94000000000003, "text": " algorithms out there, but the more elaborate ones typically don't support it, or at least"}, {"start": 234.94000000000003, "end": 238.36, "text": " not without a fair amount of additional effort."}, {"start": 238.36, "end": 245.68, "text": " The quickest example was this as it was simulated in less than 6 seconds, which I find to be mind-blowing."}, {"start": 245.68, "end": 251.52, "text": " The smoke simulation with box movements in a few seconds, I am truly out of words."}, {"start": 251.52, "end": 257.0, "text": " The rocket launch scene took the longest with 16 hours, while the falling plate example"}, {"start": 257.0, "end": 262.08, "text": " with the strong draft that threw the boxes around with about 4.5 hours of 
computation"}, {"start": 262.08, "end": 263.16, "text": " time."}, {"start": 263.16, "end": 268.44, "text": " The results depend greatly on Delta T, which is the size of the time steps, or in other"}, {"start": 268.44, "end": 273.64, "text": " words in how small increments we can advance time when creating these simulations to make"}, {"start": 273.64, "end": 276.8, "text": " sure we don't miss any important interactions."}, {"start": 276.8, "end": 282.59999999999997, "text": " You see here that in the rocket example, we have to simulate roughly 100,000 steps for"}, {"start": 282.59999999999997, "end": 285.12, "text": " every second of video footage."}, {"start": 285.12, "end": 287.24, "text": " No wonder it takes so long."}, {"start": 287.24, "end": 292.03999999999996, "text": " We have an easier time with this scene, where these time steps can be 50 times larger without"}, {"start": 292.03999999999996, "end": 296.24, "text": " losing any detail, and hence it goes much faster."}, {"start": 296.24, "end": 301.4, "text": " The great resolution also matters a great deal, which specifies how many spatial points"}, {"start": 301.4, "end": 303.59999999999997, "text": " the simulation has to take place in."}, {"start": 303.6, "end": 308.40000000000003, "text": " The higher the resolution of the grid, the larger the region we can cover, and the more"}, {"start": 308.40000000000003, "end": 310.44, "text": " details we can simulate."}, {"start": 310.44, "end": 314.92, "text": " As most research works, this technique doesn't come without limitations, however."}, {"start": 314.92, "end": 319.92, "text": " It is less accurate if we have simulations involving thin rods and shells, and typically"}, {"start": 319.92, "end": 324.88, "text": " uses 2 to 3 times more memory than a typical simulator program."}, {"start": 324.88, "end": 329.96000000000004, "text": " If these are the only trade-offs to create all this marvelous footage, sign me up this"}, {"start": 329.96000000000004, "end": 331.44, "text": " very second."}, {"start": 331.44, "end": 336.28, "text": " Overall, this paper is extraordinarily well written, and of course it has been accepted"}, {"start": 336.28, "end": 340.71999999999997, "text": " to the SIGGRAPH conference, one of the most prestigious scientific venues in computer"}, {"start": 340.71999999999997, "end": 342.24, "text": " graphics research."}, {"start": 342.24, "end": 346.56, "text": " Huge congratulations to the authors, and if you wish to see something beautiful today,"}, {"start": 346.56, "end": 349.96, "text": " make sure to have a look at the paper itself in the video description."}, {"start": 349.96, "end": 351.88, "text": " Truly stunning work."}, {"start": 351.88, "end": 353.52, "text": " What a time to be alive!"}, {"start": 353.52, "end": 356.96, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 356.96, "end": 362.91999999999996, "text": " If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 362.91999999999996, "end": 370.68, "text": " They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto"}, {"start": 370.68, "end": 377.32, "text": " your papers because Lambda GPU Cloud can cost less than half of AWS and Asia."}, {"start": 377.32, "end": 382.84, "text": " Plus they are the only Cloud service with 48GB RTX 8000."}, {"start": 382.84, "end": 389.23999999999995, "text": " And researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances,"}, 
{"start": 389.23999999999995, "end": 391.03999999999996, "text": " workstations or servers."}, {"start": 391.03999999999996, "end": 396.4, "text": " Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU"}, {"start": 396.4, "end": 397.71999999999997, "text": " instances today."}, {"start": 397.71999999999997, "end": 402.47999999999996, "text": " Our thanks to Lambda for their long-standing support and for helping us make better videos"}, {"start": 402.47999999999996, "end": 403.47999999999996, "text": " for you."}, {"start": 403.48, "end": 430.92, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=5NM_WBI9UBE
This AI Creates Human Faces From Your Sketches!
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their instrumentation of a previous paper is available here: https://app.wandb.ai/stacey/greenscreen/reports/Two-Shots-to-Green-Screen%3A-Collage-with-Deep-Learning--VmlldzoxMDc4MjY Their report on this paper is available here: https://app.wandb.ai/authors/deepfacedrawing/reports/DeepFaceDrawing-An-Overview--VmlldzoyMjgxNzM 📝 The paper "DeepFaceDrawing: Deep Generation of Face Images from Sketches" is available here: http://geometrylearning.com/DeepFaceDrawing/ Alternative paper link if it is down: https://arxiv.org/abs/2006.01047 Our earlier video on sketch tutorials is available here: https://www.youtube.com/watch?v=brs1qCDzRdk 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. In 2017, so more than 300 episodes ago, we talked about an algorithm that took a 3D model of a complex object and would give us an easy to follow step-by-step breakdown on how to draw it. Automated drawing tutorials, if you will. This was a handcrafted algorithm that used graph theory to break these 3D objects into smaller, easier to manage pieces, and since then, learning algorithms have improved so much that we started looking more and more in the opposite direction. And that opposite direction would be giving a crude drawing to the machine and getting a photorealistic image. Now, that sounds like science fiction, until we realize that scientists at Nvidia already had an amazing algorithm for this around 1.5 years ago. In that work, the input was a labeling which we can draw ourselves and the output is a hopefully photorealistic landscape image that adheres to these labels. I love how at first only the silhouette of the rock is drawn, so we have this hollow thing on the right that is not very realistic, and then it is filled in with the bucket tool, and there you go. And next thing you know, you have an amazing looking landscape image. It was capable of much, much more, but what it couldn't do is synthesize human faces this way. And believe it or not, this is what today's technique is able to do. Look, in goes our crude sketch as a guide image and out comes a nearly photorealistic human face that matches it. Interestingly, before we draw the hair itself, it gives us something as a starting point, but if we choose to, we can also change the hair shape and the outputs will follow our drawing really well. But it goes much further than this as it boasts a few additional appealing features. For instance, it not only refines the output as we change our drawing, but since one crude input can be mapped to many, many possible people, these output images can also be further directed with these sliders. According to the included user study, journeyman users mainly appreciated the variety they can achieve with this algorithm. If you look here, you can get a taste of that, while professionals were more excited about the controllability aspect of this method. That was showcased with the footage with the sliders. Another really cool thing that it can do is called face copy-paste, where we don't even need to draw anything and just take a few aspects of human faces that we would like to combine. And there you go. Absolutely amazing. This work is not without failure cases, however. You have probably noticed, but the AI is not explicitly instructed to match the eye colors, so some asymmetry may arise in the output. I am sure this will be improved just one more paper down the line and I am really curious where digital artists will take these techniques in the near future. The objective is always to get out of the way and help the artist spend more time bringing their artistic vision to life and spend less time on the execution. This is exactly what these techniques can help with. What a time to be alive. What you see here is an instrumentation for a previous paper that we covered in this series which was made by Weights & Biases. I think organizing these experiments really showcases the usability of their system. Weights & Biases provides tools to track your experiments in your deep learning projects. 
Their system is designed to save you a ton of time and money and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you have an open source, academic or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or click the link in the video description to start tracking your experiments in 5 minutes. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
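As a loose illustration of how one crude sketch can map to many plausible faces, here is a hypothetical, heavily simplified Python sketch of a component-based refinement step: each facial component of the sketch is encoded, pulled toward nearby realistic examples, and the blend is controlled by a slider. The encoder, the component database, and the slider semantics are all stand-ins invented for this example and are not the paper's actual pipeline.

import numpy as np

rng = np.random.default_rng(1)

# Stand-in "database" of embeddings of realistic face components the system has seen in training.
component_db = {name: rng.normal(size=(500, 64))
                for name in ["left_eye", "right_eye", "nose", "mouth", "rest"]}

def encode_component(sketch_patch):
    """Placeholder encoder: a real system would run a trained network here (the patch is ignored in this stub)."""
    return rng.normal(size=64)

def project_to_manifold(z, db, k=5, slider=1.0):
    """Pull a crude-sketch embedding toward its k nearest realistic neighbors.

    slider = 0 keeps the user's sketch exactly; slider = 1 fully trusts the database
    of realistic components, which is how one crude input can yield many plausible faces.
    """
    dists = np.linalg.norm(db - z, axis=1)
    neighbors = db[np.argsort(dists)[:k]]
    refined = neighbors.mean(axis=0)
    return (1.0 - slider) * z + slider * refined

def synthesize_face(sketch_patches, slider=0.8):
    """Encode each component, refine it, and hand the stack to a (stubbed-out) image decoder."""
    codes = [project_to_manifold(encode_component(p), component_db[name], slider=slider)
             for name, p in sketch_patches.items()]
    return np.concatenate(codes)  # a real system would decode this into an image

face_code = synthesize_face({name: None for name in component_db}, slider=0.8)
print(face_code.shape)

Moving the slider between 0 and 1 trades faithfulness to the drawing for realism, which is one simple way to read the controllability that the professionals in the user study appreciated.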
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Dr. Karajon Aifahir."}, {"start": 4.64, "end": 11.72, "text": " In 2017, so more than 300 episodes ago, we talked about an algorithm that took a 3D model"}, {"start": 11.72, "end": 17.36, "text": " of a complex object and would give us an easy to follow step-by-step breakdown on how"}, {"start": 17.36, "end": 18.36, "text": " to draw it."}, {"start": 18.36, "end": 21.16, "text": " Automated drawing tutorials, if you will."}, {"start": 21.16, "end": 25.92, "text": " This was a handcrafted algorithm that used graph theory to break these 3D objects into"}, {"start": 25.92, "end": 31.64, "text": " smaller, easier to manage pieces, and since then, learning algorithms have improved so"}, {"start": 31.64, "end": 36.08, "text": " much that we started looking more and more to the opposite direction."}, {"start": 36.08, "end": 40.68, "text": " And that opposite direction would be giving a crew drawing to the machine and getting a"}, {"start": 40.68, "end": 42.52, "text": " photorealistic image."}, {"start": 42.52, "end": 48.6, "text": " Now, that sounds like science fiction, until we realize that scientists at Nvidia already"}, {"start": 48.6, "end": 53.0, "text": " had an amazing algorithm for this around 1.5 years ago."}, {"start": 53.0, "end": 57.68, "text": " In that work, the input was a labeling which we can draw ourselves and the output is"}, {"start": 57.68, "end": 62.120000000000005, "text": " a hopefully photorealistic landscape image that adheres to these labels."}, {"start": 62.120000000000005, "end": 67.32, "text": " I love how first only the silhouette of the rock is drawn, so we have this hollow thing"}, {"start": 67.32, "end": 72.36, "text": " on the right that is not very realistic and then it is now filled in with the bucket"}, {"start": 72.36, "end": 75.16, "text": " tool and there you go."}, {"start": 75.16, "end": 79.24000000000001, "text": " And next thing you know, you have an amazing looking landscape image."}, {"start": 79.24, "end": 84.8, "text": " It was capable of much, much more, but what it couldn't do is synthesize human faces"}, {"start": 84.8, "end": 85.8, "text": " this way."}, {"start": 85.8, "end": 90.32, "text": " And believe it or not, this is what today's technique is able to do."}, {"start": 90.32, "end": 96.36, "text": " Look, in goes our crew'd sketch as a guide image and out comes a nearly photorealistic"}, {"start": 96.36, "end": 98.96, "text": " human face that matches it."}, {"start": 98.96, "end": 104.28, "text": " Interestingly, before we draw the hair itself, it gives us something as a starting point,"}, {"start": 104.28, "end": 109.4, "text": " but if we choose to, we can also change the hair shape and the outputs will follow our"}, {"start": 109.4, "end": 111.4, "text": " drawing really well."}, {"start": 111.4, "end": 116.6, "text": " But it goes much further than this as it boasts a few additional appealing features."}, {"start": 116.6, "end": 122.24000000000001, "text": " For instance, it not only refines the output as we change our drawing, but since one crew'd"}, {"start": 122.24000000000001, "end": 128.08, "text": " input can be mapped to many many possible people, these output images can also be further"}, {"start": 128.08, "end": 130.36, "text": " or directed with these sliders."}, {"start": 130.36, "end": 135.52, "text": " According to the included user study, journeyman users mainly appreciated the variety they"}, {"start": 135.52, "end": 
137.76000000000002, "text": " can achieve with this algorithm."}, {"start": 137.76000000000002, "end": 142.84, "text": " If you look here, you can get a taste of that while professionals were more excited about"}, {"start": 142.84, "end": 145.68, "text": " the controllability aspect of this method."}, {"start": 145.68, "end": 148.64000000000001, "text": " That was showcased with the footage with the sliders."}, {"start": 148.64000000000001, "end": 153.68, "text": " Another really cool thing that it can do is called face copy paste where we don't even"}, {"start": 153.68, "end": 159.92000000000002, "text": " need to draw anything and just take a few aspects of human faces that we would like to combine."}, {"start": 159.92, "end": 163.79999999999998, "text": " And there you go."}, {"start": 163.79999999999998, "end": 165.07999999999998, "text": " Absolutely amazing."}, {"start": 165.07999999999998, "end": 167.83999999999997, "text": " This work is not without failure cases, however."}, {"start": 167.83999999999997, "end": 173.35999999999999, "text": " You have probably noticed, but the AI is not explicitly instructed to match the eye colors"}, {"start": 173.35999999999999, "end": 176.44, "text": " where some asymmetry may arise in the output."}, {"start": 176.44, "end": 180.92, "text": " I am sure this will be improved just one more paper down the line and I am really curious"}, {"start": 180.92, "end": 184.79999999999998, "text": " where digital artists will take these techniques in the near future."}, {"start": 184.79999999999998, "end": 189.83999999999997, "text": " The objective is always to get out of the way and help the artist spend more time bringing"}, {"start": 189.84, "end": 194.92000000000002, "text": " their artistic vision to life and spend less time on the execution."}, {"start": 194.92000000000002, "end": 198.16, "text": " This is exactly what these techniques can help with."}, {"start": 198.16, "end": 199.84, "text": " What a time to be alive."}, {"start": 199.84, "end": 204.32, "text": " What you see here is an instrumentation for a previous paper that we covered in this"}, {"start": 204.32, "end": 207.56, "text": " series which was made by weights and biases."}, {"start": 207.56, "end": 213.04, "text": " I think organizing these experiments really showcases the usability of their system."}, {"start": 213.04, "end": 217.64000000000001, "text": " Wates and biases provides tools to track your experiments in your deep learning projects."}, {"start": 217.64, "end": 222.44, "text": " Your system is designed to save you a ton of time and money and it is actively used"}, {"start": 222.44, "end": 229.16, "text": " in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more."}, {"start": 229.16, "end": 234.07999999999998, "text": " And the best part is that if you have an open source, academic or personal project,"}, {"start": 234.07999999999998, "end": 236.23999999999998, "text": " you can use their tools for free."}, {"start": 236.23999999999998, "end": 238.76, "text": " It really is as good as it gets."}, {"start": 238.76, "end": 244.83999999999997, "text": " Make sure to visit them through wnb.com slash papers or click the link in the video description"}, {"start": 244.84, "end": 248.20000000000002, "text": " to start tracking your experiments in 5 minutes."}, {"start": 248.20000000000002, "end": 252.8, "text": " Our thanks to weights and biases for their long standing support and for helping us make"}, {"start": 252.8, "end": 254.2, "text": " better videos 
for you."}, {"start": 254.2, "end": 283.76, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=MD_k3p4MH-A
Can We Simulate Merging Bubbles? 🌊
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://app.wandb.ai/ajayuppili/efficientnet/reports/How-Efficient-is-EfficientNet%3F--Vmlldzo4NTk5MQ 📝 The paper "Constraint Bubbles and Affine Regions: Reduced Fluid Models for Efficient Immersed Bubbles and Flexible Spatial Coarsening" is available here: https://cs.uwaterloo.ca/~rgoldade/reducedfluids/ Check out Blender here (free): https://www.blender.org/ If you wish to play with some fluids, try the FLIP Fluids plugin (paid, with free demo): https://flipfluids.com/ Note that Blender also contains Mantaflow, its own fluid simulation program and that's also great! 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. If we write the laws of fluid motion into a computer program, we can create beautiful water simulations, like the one you see here. However, with all the progress in computer graphics research, we can not only simulate the water volume itself, but there are also efficient techniques to add foam, spray, and bubbles to this simulation. The even crazier thing is that this paper from 8 years ago can do all three in one go, and is remarkably simple for what it does. Just look at this heavenly footage, all simulated on a computer by using Blender, a piece of free and open source software, and the FLIP Fluids plugin. But all this has been possible for quite a while now, so what happened in the 8 years since this paper was published? How has this been improved? Well, it's good to have bubbles in our simulation, however, in real life, bubbles have their individual densities and can coalesce at a moment's notice. This technique is able to simulate these events, and you will see that it offers much, much more. Now, let's marvel at three different phenomena in this simulation. First, the bubbles here are less dense than the water, and hence start to rise, then look at the interaction with the air. Now, after this, the bubbles that got denser than the water start sinking again, and all this can be done on your computer today. What a beautiful simulation! And now, hold on to your papers because this method also adds simulating air pressure, which opens up the possibility for an interaction to happen at a distance. Look, first we start pushing the piston here. The layer of air starts to push the fluid, which weighs on the next air pocket, and so on. Such a beautiful phenomenon! And let's not miss the best part. When we pull the piston back, the emerging negative flux starts drawing the liquid back. One more time. Simulating all this efficiently is quite a technical marvel. When reading through the paper, I was very surprised to see that it is able to incorporate this air compression without simulating the air gaps themselves. A simulation without simulation, if you will. Let's simulate pouring water through the neck of the water cooler with a standard, already existing technique. For some reason, it doesn't look right, does it? So, what's missing here? We see a vast downward flow of liquid, therefore there also has to be a vast upward flow of air at the same time, but I don't see any of that here. Let's see how the new simulation method handles this. We start the downflow, and yes, huge air bubbles are coming up, creating this beautiful glugging effect. I think I now have a good guess as to what scientists are discussing over the water cooler in Professor Christopher Batty's research group. So, how long do we have to wait to get these results? You see, the quality of the outputs is nearly the same as the reference simulation, however, it takes less than half the amount of time to produce it. Admittedly, these simulations still take a few hours to complete, but it is absolutely amazing that these beautiful, complex phenomena can be simulated in a reasonable amount of time, and you know the drill, two more papers down the line, and it will be improved significantly. But we don't necessarily need a bubbly simulation to enjoy the advantages of this method. 
In this scene, we see a detailed splash, where the one on the right here was simulated with the new method, and it also matches the reference solution, and it was more than three times faster. If you have a look at the paper in the video description, you will see how it simplifies the simulation by finding a way to identify regions of the simulation domain where not a lot is happening, and coarsen the simulation there. These are the green regions that you see here, and the paper refers to them as affine regions. As you see, the progress in computer graphics and fluid simulation research is absolutely stunning, and these amazing papers just keep coming out year after year. What a time to be alive! This episode has been supported by Weights & Biases. In this post, they show you how to use their system to train a neural network architecture called EfficientNet. The whole thing is beautifully explained there, so make sure to click the link in the video description. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you have an open source, academic, or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or click the link in the video description to start tracking your experiments in five minutes. Our thanks to Weights & Biases for their long-standing support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
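To give a rough flavor of the coarsening idea mentioned in the transcription above, here is a small, hypothetical numpy sketch that flags blocks of a velocity field where almost nothing is happening; such blocks are the candidates one could represent with far fewer degrees of freedom. The block size, the threshold, and the uniform-motion test are assumptions for illustration and are not the paper's affine-region construction.

import numpy as np

def find_coarsenable_blocks(velocity, block=8, tol=1e-3):
    """Return a boolean grid marking blocks whose velocity varies very little.

    velocity : (nx, ny, 3) array of per-cell velocities
    block    : edge length of the blocks we test (assumed)
    tol      : maximum allowed per-component standard deviation inside a block (assumed)
    """
    nx, ny, _ = velocity.shape
    bx, by = nx // block, ny // block
    flags = np.zeros((bx, by), dtype=bool)
    for i in range(bx):
        for j in range(by):
            patch = velocity[i*block:(i+1)*block, j*block:(j+1)*block]
            # Nearly constant velocity inside the block means little is happening here,
            # so this block is a candidate for a coarse, reduced representation.
            flags[i, j] = patch.std(axis=(0, 1)).max() < tol
    return flags

# Demo: still water everywhere except a small splashing region in one corner.
v = np.zeros((64, 64, 3))
v[:16, :16] += np.random.default_rng(2).normal(scale=0.5, size=(16, 16, 3))
print(find_coarsenable_blocks(v))   # True for calm blocks, False near the splash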
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.4, "end": 8.16, "text": " If we write the laws of fluid motion into a computer program,"}, {"start": 8.16, "end": 12.64, "text": " we can create beautiful water simulations, like the one you see here."}, {"start": 12.64, "end": 15.84, "text": " However, with all the progress in computer graphics research,"}, {"start": 15.84, "end": 18.64, "text": " we can not only simulate the water volume itself,"}, {"start": 18.64, "end": 24.240000000000002, "text": " but there are also efficient techniques to add foam, spray, and bubbles to this simulation."}, {"start": 24.24, "end": 31.36, "text": " The even crazier thing is that this paper from 8 years ago can do all three in one go,"}, {"start": 31.36, "end": 33.839999999999996, "text": " and is remarkably simple for what it does."}, {"start": 34.64, "end": 40.08, "text": " Just look at this heavenly footage all simulated on a computer by using Blender,"}, {"start": 40.08, "end": 44.64, "text": " a piece of free and open source software, and the flip fluids plugin."}, {"start": 45.28, "end": 48.239999999999995, "text": " But all this has been possible for quite a while now,"}, {"start": 48.239999999999995, "end": 52.879999999999995, "text": " so what happened in the 8 years since this paper has been published?"}, {"start": 52.88, "end": 54.400000000000006, "text": " How has this been improved?"}, {"start": 55.120000000000005, "end": 57.92, "text": " Well, it's good to have bubbles in our simulation,"}, {"start": 57.92, "end": 62.160000000000004, "text": " however, in real life, bubbles have their individual densities"}, {"start": 62.160000000000004, "end": 64.88, "text": " and can coalesce at a moment's notice."}, {"start": 64.88, "end": 67.36, "text": " This technique is able to simulate these events,"}, {"start": 67.36, "end": 70.16, "text": " and you will see that it offers much, much more."}, {"start": 70.80000000000001, "end": 74.56, "text": " Now, let's marvel at three different phenomena in this simulation."}, {"start": 74.88, "end": 78.64, "text": " First, the bubbles here are less dense than the water,"}, {"start": 78.64, "end": 83.12, "text": " and hence start to rise, then look at the interaction with the air."}, {"start": 83.76, "end": 89.6, "text": " Now, after this, the bubbles that got denser than the water start sinking again,"}, {"start": 89.6, "end": 93.12, "text": " and all this can be done on your computer today."}, {"start": 93.12, "end": 94.88, "text": " What a beautiful simulation!"}, {"start": 96.88, "end": 102.48, "text": " And now, hold on to your papers because this method also adds simulating air pressure,"}, {"start": 102.48, "end": 106.72, "text": " which opens up the possibility for an interaction to happen at a distance."}, {"start": 106.72, "end": 110.24, "text": " Look, first we start pushing the piston here."}, {"start": 110.24, "end": 115.36, "text": " The layer of air starts to push the fluid, which weighs on the next air pocket,"}, {"start": 115.36, "end": 117.52, "text": " and so on."}, {"start": 117.52, "end": 119.92, "text": " Such a beautiful phenomenon!"}, {"start": 119.92, "end": 121.6, "text": " And let's not miss the best part."}, {"start": 121.6, "end": 127.12, "text": " When we pull the piston back, the emerging negative flux starts drawing the liquid back."}, {"start": 127.12, "end": 129.2, "text": " One more time."}, {"start": 129.2, "end": 133.92, "text": " Simulating all 
this efficiently is quite a technical marvel."}, {"start": 133.92, "end": 139.92, "text": " When reading through the paper, I was very surprised to see that it is able to incorporate this air"}, {"start": 139.92, "end": 143.11999999999998, "text": " compression without simulating the air gaps themselves."}, {"start": 143.67999999999998, "end": 146.23999999999998, "text": " A simulation without simulation, if you will."}, {"start": 147.04, "end": 152.95999999999998, "text": " Let's simulate pouring water through the neck of the water cooler with a standard already existing technique."}, {"start": 154.0, "end": 156.95999999999998, "text": " For some reason, it doesn't look right, does it?"}, {"start": 157.6, "end": 159.51999999999998, "text": " So, what's missing here?"}, {"start": 159.52, "end": 166.56, "text": " We see a vast downward flow of liquid, therefore there also has to be a vast upward flow of air"}, {"start": 166.56, "end": 169.92000000000002, "text": " at the same time, but I don't see any of that here."}, {"start": 170.48000000000002, "end": 173.60000000000002, "text": " Let's see how the new simulation method handles this."}, {"start": 173.60000000000002, "end": 179.20000000000002, "text": " We start the downflow, and yes, huge air bubbles are coming up,"}, {"start": 179.20000000000002, "end": 181.60000000000002, "text": " creating this beautiful, glugging effect."}, {"start": 182.4, "end": 187.76000000000002, "text": " I think I now have a good guess as to what scientists are discussing over the water cooler"}, {"start": 187.76, "end": 190.07999999999998, "text": " in Professor Christopher Batty's research group."}, {"start": 190.72, "end": 193.6, "text": " So, how long do we have to wait to get these results?"}, {"start": 194.23999999999998, "end": 198.72, "text": " You see, the quality of the outputs is nearly the same as the reference simulation,"}, {"start": 198.72, "end": 202.79999999999998, "text": " however, it takes less than half the amount of time to produce it."}, {"start": 203.35999999999999, "end": 207.04, "text": " Admittedly, these simulations still take a few hours to complete,"}, {"start": 207.04, "end": 212.23999999999998, "text": " but it is absolutely amazing that this beautiful, complex phenomena can be simulated"}, {"start": 212.23999999999998, "end": 216.64, "text": " in a reasonable amount of time, and you know the drill, two more papers down the line,"}, {"start": 216.64, "end": 219.04, "text": " and it will be improved significantly."}, {"start": 219.04, "end": 224.88, "text": " But we don't necessarily need a bubbly simulation to enjoy the advantages of this method."}, {"start": 224.88, "end": 229.92, "text": " In this scene, we see a detailed splash, where the one on the right here was simulated with"}, {"start": 229.92, "end": 236.32, "text": " a new method, and it also matches the reference solution, and it was more than three times faster."}, {"start": 237.2, "end": 241.44, "text": " If you have a look at the paper in the video description, you will see how it simplifies the"}, {"start": 241.44, "end": 248.07999999999998, "text": " simulation by finding a way to identify regions of the simulation domain where not a lot is happening,"}, {"start": 248.07999999999998, "end": 252.4, "text": " and course on the simulation there. These are the green regions that you see here,"}, {"start": 252.4, "end": 258.0, "text": " and the paper refers to them as F-Fine regions. 
As you see, the progress in computer graphics and"}, {"start": 258.0, "end": 263.92, "text": " fluid simulation research is absolutely stunning, and these amazing papers just keep coming out"}, {"start": 263.92, "end": 270.4, "text": " year after year. What a time to be alive! This episode has been supported by weights and biases."}, {"start": 270.4, "end": 275.52, "text": " In this post, they show you how to use their system to train a neural network architecture"}, {"start": 275.52, "end": 280.88, "text": " called EfficientNet. The whole thing is beautifully explained there, so make sure to click the link"}, {"start": 280.88, "end": 285.59999999999997, "text": " in the video description. Weight and biases provides tools to track your experiments in your"}, {"start": 285.59999999999997, "end": 290.96, "text": " deep learning projects. Their system is designed to save you a ton of time and money, and it is"}, {"start": 290.96, "end": 298.23999999999995, "text": " actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more."}, {"start": 298.24, "end": 303.28000000000003, "text": " And the best part is that if you have an open source, academic, or personal project,"}, {"start": 303.28000000000003, "end": 309.2, "text": " you can use their tools for free. It really is as good as it gets. Make sure to visit them through"}, {"start": 309.2, "end": 316.0, "text": " www.nba.com slash papers, or click the link in the video description to start tracking your experiments"}, {"start": 316.0, "end": 320.56, "text": " in five minutes. Our thanks to weights and biases for their long-standing support,"}, {"start": 320.56, "end": 325.6, "text": " and for helping us make better videos for you. Thanks for watching and for your generous support,"}, {"start": 325.6, "end": 328.64000000000004, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=-6Xn4nKm-Qw
OpenAI’s Image GPT Completes Your Images With Style!
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://app.wandb.ai/ayush-thakur/interpretability/reports/Interpretability-in-Deep-Learning-with-W%26B---GradCAM--Vmlldzo5MTIyNw 📝 The paper "Generative Pretraining from Pixels (Image GPT)" is available here: https://openai.com/blog/image-gpt/ Tweets: Website layout: https://twitter.com/sharifshameem/status/1283322990625607681 Plots: https://twitter.com/aquariusacquah/status/1285415144017797126?s=12 Typesetting math: https://twitter.com/pavtalk/status/1285410751092416513 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #GPT3
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. In early 2019, a learning-based technique appeared that could perform common natural language processing operations, for instance, answering questions, completing texts, reading comprehension, summarization, and more. This method was developed by scientists at OpenAI and they called it GPT2. A follow-up paper introduced a more capable version of this technique called GPT3, and among many incredible examples, it could generate website layouts from a written description. The key idea in both cases was that we would provide it with an incomplete piece of text, and it would try to finish it. However, no one said that these neural networks have to only deal with text information, and sure enough, in this work, scientists at OpenAI introduced a new version of this method that tries to complete not text, but images. The problem statement is simple. We give it an incomplete image, and we ask the AI to fill in the missing pixels. That is, of course, an immensely difficult task, because these images may depict any part of the world around us. It would have to know a great deal about our world to be able to continue the images, so how well did it do? Let's have a look. This is undoubtedly a cat. And look, see that white part that is just starting? The interesting part has been cut out of the image. What could that be? A piece of paper, or something else? Now, let's leave the dirty work to the machine and ask it to finish it. Wow, a piece of paper indeed, according to the AI, and it even has text on it. But the text has a heading section and a paragraph below it too. Truly excellent. You know what is even more excellent? Perhaps the best part. It also added the indirect illumination on the fur of the cat, meaning that it sees that a blue room surrounds it, and therefore some amount of the color bleeds onto the fur of the cat, making it bluer. I am a light transport researcher by trade, so I spend the majority of my life calculating things like this, and I have to say that this looks quite good to me. Absolutely amazing attention to detail. But it had more ideas. What's this? The face of the cat has been finished quite well, in fact, but the rest, I am not so sure. If you have an idea what this is supposed to be, please let me know in the comments. And here go the rest of the results. All quite good. And the true, real image has been concealed from the algorithm. This is the reference solution. Let's see the next one. Oh my, scientists at OpenAI pulled no punches here, this is also quite nasty. How many stripes should this continue with? Zero, maybe? In any case, this solution is not unreasonable. I appreciate the fact that it continued the shadows of the humans. Next one. Yes, more stripes, great, but likely, a few too many. Here are the remainder of the solutions, and the true reference image again. Let's have a look at this water droplet example too. We humans know that since we see the remnants of some ripples over there too, there must be a splash, but does the AI know? Oh yes, yes it does. Amazing. And the true image. Now, what about these little creatures? The first continuation finishes them correctly and puts them on a twig. The second one involves a stone. The third is my favorite, hold on to your papers, and look at this. They stand in the water and we can even see their mirror images. Wow. The fourth is a branch, and finally, the true reference image. This is one of its best works I have seen so far. 
There are some more results and note that these are not cherry-picked, or in other words, there was no selection process for the results. Nothing was discarded. These came out from the AI as you see them. There is a link to these and to the paper in the video description, so make sure to have a look and let me know in the comments if you have found something interesting. So what about the size of the neural network for this technique? Well, it contains from 1.5 to about 7 billion parameters. Let's have a look together and find out what that means. These are the results from the GPT2 paper, the previous version of the text processor, on a challenging reading comprehension test as a function of the number of parameters. As you see, around 1.5 billion parameters, which is roughly similar to GPT2, it learned a great deal, but its understanding was nowhere near the level of human comprehension. However, as they grew the network, something incredible happened. Non-trivial capabilities started to appear as we approached 100 billion parameters. Look, it nearly matched the level of humans, and all this was measured on a nasty reading comprehension test. So this Image GPT has a number of parameters that is closer to GPT2 than GPT3, so we can maybe speculate that the next version could be, potentially, another explosion in capabilities. I can't wait to have a look at that. What a time to be alive! This episode has been supported by Weights & Biases. In this post, they show you how to use their system to visualize which part of the image your neural network looks at before it concludes that it is a cat. You can even try an example in an interactive notebook through the link in the video description. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you have an open source, academic, or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or click the link in the video description to start tracking your experiments in 5 minutes. Our thanks to weights and biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
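To illustrate the general idea of pixel-by-pixel image completion, here is a minimal, hypothetical sketch; the model interface (next_token_probs), the 16-color palette, and the image size are my own assumptions, and this is not OpenAI's Image GPT code.

import numpy as np

def complete_image(model, image_tokens, visible_rows, height, width, rng):
    """Keep the visible top rows as a prompt, then sample the rest one pixel at a time."""
    seq = list(image_tokens[: visible_rows * width])      # the given top part of the image
    for _ in range(height * width - len(seq)):
        probs = model.next_token_probs(seq)               # hypothetical model call
        seq.append(rng.choice(len(probs), p=probs))       # sample the next "pixel" token
    return np.array(seq).reshape(height, width)

class UniformToyModel:
    """Stand-in model: predicts a uniform distribution over 16 palette colors."""
    def next_token_probs(self, seq):
        return np.full(16, 1 / 16)

rng = np.random.default_rng(0)
tokens = rng.integers(0, 16, size=32 * 32)                # a fake quantized 32x32 image
completed = complete_image(UniformToyModel(), tokens, visible_rows=16,
                           height=32, width=32, rng=rng)
print(completed.shape)  # (32, 32)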
[{"start": 0.0, "end": 4.72, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karojejol Nefaher."}, {"start": 4.72, "end": 9.88, "text": " In early 2019, a learning-based technique appeared that could perform common natural language"}, {"start": 9.88, "end": 16.44, "text": " processing operations, for instance, answering questions, completing texts, reading comprehension,"}, {"start": 16.44, "end": 18.48, "text": " summarization, and more."}, {"start": 18.48, "end": 23.88, "text": " This method was developed by scientists at OpenAI and they called it GPT2."}, {"start": 23.88, "end": 29.799999999999997, "text": " A follow-up paper introduced a more capable version of this technique called GPT3, and among"}, {"start": 29.799999999999997, "end": 35.0, "text": " many incredible examples, it could generate website layouts from a written description."}, {"start": 35.0, "end": 40.76, "text": " The key idea in both cases was that we would provide it an incomplete piece of text,"}, {"start": 40.76, "end": 42.64, "text": " and it would try to finish it."}, {"start": 42.64, "end": 48.04, "text": " However, no one said that these neural networks have to only deal with text information,"}, {"start": 48.04, "end": 53.44, "text": " and sure enough, in this work, scientists at OpenAI introduced a new version of this method"}, {"start": 53.44, "end": 57.4, "text": " that tries to complete not text, but images."}, {"start": 57.4, "end": 59.32, "text": " The problem statement is simple."}, {"start": 59.32, "end": 64.6, "text": " We give it an incomplete image, and we ask the AI to fill in the missing pixels."}, {"start": 64.6, "end": 69.32, "text": " That is, of course, an immensely difficult task, because these images made the picked"}, {"start": 69.32, "end": 71.56, "text": " any part of the world around us."}, {"start": 71.56, "end": 76.2, "text": " It would have to know a great deal about our world to be able to continue the images,"}, {"start": 76.2, "end": 78.4, "text": " so how well did it do?"}, {"start": 78.4, "end": 79.72, "text": " Let's have a look."}, {"start": 79.72, "end": 82.32, "text": " This is undoubtedly a cat."}, {"start": 82.32, "end": 86.27999999999999, "text": " And look, see that white part that is just starting."}, {"start": 86.27999999999999, "end": 89.67999999999999, "text": " The interesting part has been cut out of the image."}, {"start": 89.67999999999999, "end": 90.67999999999999, "text": " What could that be?"}, {"start": 90.67999999999999, "end": 93.28, "text": " A piece of paper, or something else?"}, {"start": 93.28, "end": 98.11999999999999, "text": " Now, let's leave the dirty work to the machine and ask it to finish it."}, {"start": 98.11999999999999, "end": 105.72, "text": " Wow, a piece of paper indeed, according to the AI, and it even has text on it."}, {"start": 105.72, "end": 110.35999999999999, "text": " But the text has a heading section and a paragraph below it too."}, {"start": 110.35999999999999, "end": 111.88, "text": " Truly excellent."}, {"start": 111.88, "end": 114.11999999999999, "text": " You know what is even more excellent?"}, {"start": 114.11999999999999, "end": 115.83999999999999, "text": " Perhaps the best part."}, {"start": 115.83999999999999, "end": 121.16, "text": " It also added the indirect illumination on the fur of the cat, meaning that it sees that"}, {"start": 121.16, "end": 126.08, "text": " a blue room surrounds it, and therefore some amount of the color bleeds onto the fur"}, {"start": 126.08, "end": 128.48, "text": " 
of the cat, making it blower."}, {"start": 128.48, "end": 133.35999999999999, "text": " I am a light transport researcher by trade, so I spend the majority of my life calculating"}, {"start": 133.35999999999999, "end": 138.12, "text": " things like this, and I have to say that this looks quite good to me."}, {"start": 138.12, "end": 140.6, "text": " Absolutely amazing attention to detail."}, {"start": 140.6, "end": 142.48, "text": " But it had more ideas."}, {"start": 142.48, "end": 143.48, "text": " What's this?"}, {"start": 143.48, "end": 148.88, "text": " The face of the cat has been finished quite well in fact, but the rest, I am not so sure."}, {"start": 148.88, "end": 153.12, "text": " If you have an idea what this is supposed to be, please let me know in the comments."}, {"start": 153.12, "end": 155.56, "text": " And here go the rest of the results."}, {"start": 155.56, "end": 157.28, "text": " All quite good."}, {"start": 157.28, "end": 161.6, "text": " And the true, real image has been concealed for the algorithm."}, {"start": 161.6, "end": 163.76, "text": " This is the reference solution."}, {"start": 163.76, "end": 165.28, "text": " Let's see the next one."}, {"start": 165.28, "end": 172.0, "text": " Oh my, scientists at OpenAI pulled no punches here, this is also quite nasty."}, {"start": 172.0, "end": 174.84, "text": " How many stripes should this continue with?"}, {"start": 174.84, "end": 180.24, "text": " Zero, maybe, in any case, this solution is not unreasonable."}, {"start": 180.24, "end": 185.48, "text": " I appreciate the fact that it continued the shadows of the humans."}, {"start": 185.48, "end": 186.48, "text": " Next one."}, {"start": 186.48, "end": 191.6, "text": " Yes, more stripes, great, but likely, a few too many."}, {"start": 191.6, "end": 197.79999999999998, "text": " We are the remainder of the solutions, and the true reference image again."}, {"start": 197.79999999999998, "end": 201.0, "text": " Let's have a look at this water droplet example too."}, {"start": 201.0, "end": 205.92, "text": " We humans know that since we see the remnants of some ripples over there too, there must"}, {"start": 205.92, "end": 209.84, "text": " be a splash, but does the AI know?"}, {"start": 209.84, "end": 213.32, "text": " Oh yes, yes it does."}, {"start": 213.32, "end": 214.88, "text": " Amazing."}, {"start": 214.88, "end": 217.07999999999998, "text": " And the true image."}, {"start": 217.07999999999998, "end": 220.95999999999998, "text": " Now, what about these little creatures?"}, {"start": 220.96, "end": 225.72, "text": " The first continuation finishes them correctly and puts them on a twig."}, {"start": 225.72, "end": 229.12, "text": " The second one involves a stone."}, {"start": 229.12, "end": 234.4, "text": " The third is my favorite, hold on to your papers, and look at this."}, {"start": 234.4, "end": 238.84, "text": " They stand in the water and we can even see their mirror images."}, {"start": 238.84, "end": 240.52, "text": " Wow."}, {"start": 240.52, "end": 245.32, "text": " The fourth is a branch, and finally, the true reference image."}, {"start": 245.32, "end": 249.08, "text": " This is one of its best works I have seen so far."}, {"start": 249.08, "end": 254.20000000000002, "text": " There are some more results and note that these are not cherry-picked, or in other words,"}, {"start": 254.20000000000002, "end": 256.92, "text": " there was no selection process for the results."}, {"start": 256.92, "end": 258.04, "text": " Nothing was discarded."}, {"start": 
258.04, "end": 260.96000000000004, "text": " This came out from the AI as you see them."}, {"start": 260.96000000000004, "end": 264.84000000000003, "text": " There is a link to these and to the paper in the video description, so make sure to have"}, {"start": 264.84000000000003, "end": 268.88, "text": " a look and let me know in the comments if you have found something interesting."}, {"start": 268.88, "end": 272.64, "text": " So what about the size of the neural network for this technique?"}, {"start": 272.64, "end": 277.52000000000004, "text": " Well, it contains from 1.5 to about 7 billion parameters."}, {"start": 277.52, "end": 280.84, "text": " Let's have a look together and find out what that means."}, {"start": 280.84, "end": 285.79999999999995, "text": " These are the results from the GPT2 paper, the previous version of the text processor,"}, {"start": 285.79999999999995, "end": 291.08, "text": " on a challenging reading comprehension test as a function of the number of parameters."}, {"start": 291.08, "end": 297.12, "text": " As you see, around 1.5 billion parameters, which is roughly similar to GPT2, it learned"}, {"start": 297.12, "end": 303.0, "text": " a great deal, but its understanding was nowhere near the level of human comprehension."}, {"start": 303.0, "end": 307.76, "text": " However, as they grew the network, something incredible happened."}, {"start": 307.76, "end": 312.96, "text": " Non-trivial capabilities started to appear as we approached 100 billion parameters."}, {"start": 312.96, "end": 319.44, "text": " Look, it nearly matched the level of humans, and all this was measured on a nasty reading"}, {"start": 319.44, "end": 321.24, "text": " comprehension test."}, {"start": 321.24, "end": 328.44, "text": " So this image GPT has the number of parameters that is closer to GPT2 than GPT3, so we can"}, {"start": 328.44, "end": 334.8, "text": " maybe speculate that the next version could be, potentially, another explosion in capabilities."}, {"start": 334.8, "end": 337.12, "text": " I can't wait to have a look at that."}, {"start": 337.12, "end": 338.84, "text": " What a time to be alive!"}, {"start": 338.84, "end": 341.88, "text": " This episode has been supported by weights and biases."}, {"start": 341.88, "end": 346.72, "text": " In this post, they show you how to use their system to visualize which part of the image"}, {"start": 346.72, "end": 350.84, "text": " your neural network looks at before it concludes that it is a cat."}, {"start": 350.84, "end": 356.04, "text": " You can even try an example in an interactive notebook through the link in the video description."}, {"start": 356.04, "end": 360.68, "text": " It's and biases provides tools to track your experiments in your deep learning projects."}, {"start": 360.68, "end": 365.48, "text": " Their system is designed to save you a ton of time and money, and it is actively used"}, {"start": 365.48, "end": 372.20000000000005, "text": " in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more."}, {"start": 372.20000000000005, "end": 377.14000000000004, "text": " And the best part is that if you have an open source, academic, or personal project,"}, {"start": 377.14000000000004, "end": 379.28000000000003, "text": " you can use their tools for free."}, {"start": 379.28000000000003, "end": 381.8, "text": " It really is as good as it gets."}, {"start": 381.8, "end": 387.88, "text": " Make sure to visit them through wnbe.com slash papers, or click the link in the video description"}, {"start": 
387.88, "end": 391.24, "text": " to start tracking your experiments in 5 minutes."}, {"start": 391.24, "end": 395.84000000000003, "text": " Our thanks to weights and biases for their long-standing support and for helping us make"}, {"start": 395.84000000000003, "end": 397.24, "text": " better videos for you."}, {"start": 397.24, "end": 427.2, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=iKvlOviWs3E
This AI Creates Images Of Nearly Any Animal! 🦉
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "COCO-FUNIT: Few-Shot Unsupervised Image Translation with a Content Conditioned Style Encoder" is available here: https://nvlabs.github.io/COCO-FUNIT/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. The research field of image translation with the aid of learning algorithms has been on fire lately. For instance, this earlier technique would look at a large number of animal faces and could interpolate between them or, in other words, blend one kind of dog into another breed. But that's not all because it could even transform dogs into cats or create these glorious plump cats and cheetahs. The results were absolutely stunning. However, it would only work on the domains it was trained on. In other words, it could only translate to and from species that it took the time to learn about. This new method offers something really amazing. It can handle multiple domains or multiple breeds, if you will, even ones that it hadn't seen previously. That sounds flat out impossible, so let's have a look at some results. This dog will be used as content, therefore the output should have a similar pose, but its breed has to be changed to this one. But there is one little problem. And that problem is that the AI has never seen this breed before. This will be very challenging because we only see the head of the dog used for style. Should the body of the dog also get curly hair? You can only know if you know this particular dog breed, or if you're smart and can infer missing information by looking at other kinds of dogs. Let's see the result. Incredible. The remnants of the leash also remain there in the output results. It also did a nearly impeccable job with this bird, where again the style image is from a previously unseen breed. Now, this is of course a remarkably difficult problem domain: translating into different kinds of animals that you know nothing about, apart from a tiny, cropped image, would be quite a challenge even for a human. However, this one is not the first technique to attempt to solve it, so let's see how it stacks up against a previous method. This one is from just a year ago, and you will see in a moment how much this field has progressed since then. For instance, in this output we get two dogs, which seem to be a mix of the content and the style dog. And while the new method still seems to have some structural issues, the dog type and the pose are indeed correct. The rest of the results also appear to be significantly better. But what do you think? Did you notice something weird? Let me know in the comments below. And now, let's transition into image interpolation. This will be a touch more involved than previous interpolation efforts. You see, in this previous paper we had a source and a target image, and the AI was asked to generate intermediate images between them. Simple enough. In this case, however, we have not two but three images as an input. There will be a content image. This will provide the high-level features, such as pose, and its style is going to transition from this to this. The goal is that the content image remains intact while transforming one breed or species into another. This particular example is one of my favorites. Such a beautiful transition and surely not all, but many of the intermediate images could stand on their own. Again, the style images are from unseen species. Not all cases do this well with the intermediate images, however. Here we start with one eye because the content and the style image have one eye visible while the target style of the owl has two. How do we solve that? Of course, with nuclear vision. Look. Very amusing. 
Loving this example, especially how impossible it seems because the owl is looking into the camera with both eyes while we see its backside below the head. If it looked to the side like the input content image, this might be a possibility, but with this contorted body posture, I am not so sure, so I'll give it a pass on this one. So there you go, transforming one known animal into a different one that the AI has never seen before. And it is already doing a more than formidable job at that. What a time to be alive. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000 and V100 instances and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
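To give a rough feel for how a content image and a style image can be combined, here is a tiny, hypothetical forward pass loosely in the spirit of such content/style translators; the layer sizes and the modulation scheme are assumptions of mine and this is not the COCO-FUNIT architecture.

import torch
import torch.nn as nn

class TinyTranslator(nn.Module):
    def __init__(self, channels=32, style_dim=8):
        super().__init__()
        self.content_enc = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
        self.style_enc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(3, style_dim))
        self.modulate = nn.Linear(style_dim, channels)     # turns the style code into per-channel scales
        self.decoder = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, content_img, style_img):
        c = self.content_enc(content_img)                  # keeps the pose and layout
        s = self.modulate(self.style_enc(style_img))       # summarizes the appearance
        c = c * s.unsqueeze(-1).unsqueeze(-1)              # style modulates the content features
        return torch.sigmoid(self.decoder(c))

content = torch.rand(1, 3, 64, 64)    # e.g. the dog whose pose we want to keep
style = torch.rand(1, 3, 64, 64)      # e.g. a crop of the unseen breed we want
print(TinyTranslator()(content, style).shape)  # torch.Size([1, 3, 64, 64])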
[{"start": 0.0, "end": 5.5200000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karajona Ifahir, the research field"}, {"start": 5.5200000000000005, "end": 10.88, "text": " of image translation with the aid of learning algorithms has been on fire lately."}, {"start": 10.88, "end": 15.68, "text": " For instance, this earlier technique would look at a large number of animal faces and could"}, {"start": 15.68, "end": 22.28, "text": " interpolate between them or, in other words, blend one kind of dog into another breed."}, {"start": 22.28, "end": 28.28, "text": " But that's not all because it could even transform dogs into cats or create these glorious"}, {"start": 28.28, "end": 30.48, "text": " plump cats and cheetahs."}, {"start": 30.48, "end": 32.760000000000005, "text": " The results were absolutely stunning."}, {"start": 32.760000000000005, "end": 36.8, "text": " However, it would only work on the domains it was trained on."}, {"start": 36.8, "end": 41.68, "text": " In other words, it could only translate to and from species that it took the time to"}, {"start": 41.68, "end": 42.96, "text": " learn about."}, {"start": 42.96, "end": 46.400000000000006, "text": " This new method offers something really amazing."}, {"start": 46.400000000000006, "end": 52.08, "text": " It can handle multiple domains or multiple breeds, if you will, even ones that it hadn't"}, {"start": 52.08, "end": 53.8, "text": " seen previously."}, {"start": 53.8, "end": 58.120000000000005, "text": " That sounds flat out impossible, so let's have a look at some results."}, {"start": 58.12, "end": 64.0, "text": " This dog will be used as content, therefore the output should have a similar pose, but"}, {"start": 64.0, "end": 67.28, "text": " its breed has to be changed to this one."}, {"start": 67.28, "end": 69.44, "text": " But there is one little problem."}, {"start": 69.44, "end": 73.64, "text": " And that problem is that the AI has never seen this breed before."}, {"start": 73.64, "end": 79.03999999999999, "text": " This will be very challenging because we only see the head of the dog used for style."}, {"start": 79.03999999999999, "end": 81.88, "text": " Should the body of the dog also get curly hair?"}, {"start": 81.88, "end": 87.24, "text": " You only know if you know this particular dog breed, or if you're smart and can infer"}, {"start": 87.24, "end": 91.24, "text": " missing information by looking at other kinds of dogs."}, {"start": 91.24, "end": 93.75999999999999, "text": " Let's see the result."}, {"start": 93.75999999999999, "end": 94.75999999999999, "text": " Incredible."}, {"start": 94.75999999999999, "end": 100.08, "text": " The remnants of the leash also remain there in the output results."}, {"start": 100.08, "end": 105.47999999999999, "text": " It also did a nearly impeccable job with this bird, where again the style image is from"}, {"start": 105.47999999999999, "end": 107.36, "text": " a previously unseen breed."}, {"start": 107.36, "end": 112.91999999999999, "text": " Now, this is of course a remarkably difficult problem domain, translating into different"}, {"start": 112.92, "end": 118.32000000000001, "text": " kinds of animals that you know nothing about, apart from a tiny, cropped image, this would"}, {"start": 118.32000000000001, "end": 121.04, "text": " be quite a challenge even for a human."}, {"start": 121.04, "end": 126.12, "text": " However, this one is not the first technique to attempt to solve it, so let's see how"}, {"start": 126.12, "end": 128.88, "text": " 
it stacks up against a previous method."}, {"start": 128.88, "end": 133.48, "text": " This one is from just a year ago, and you will see in a moment how much this field has"}, {"start": 133.48, "end": 135.16, "text": " progressed since then."}, {"start": 135.16, "end": 140.68, "text": " For instance, in this output we get two dogs which seems to be a mix of the content and"}, {"start": 140.68, "end": 142.44, "text": " the style dog."}, {"start": 142.44, "end": 146.88, "text": " And while the new method still seems to have some structural issues, the dog type and"}, {"start": 146.88, "end": 149.16, "text": " the pose is indeed correct."}, {"start": 149.16, "end": 152.84, "text": " The rest of the results also appear to be significantly better."}, {"start": 152.84, "end": 154.07999999999998, "text": " But what do you think?"}, {"start": 154.07999999999998, "end": 155.8, "text": " Did you notice something weird?"}, {"start": 155.8, "end": 157.68, "text": " Let me know in the comments below."}, {"start": 157.68, "end": 161.32, "text": " And now, less transition into image interpolation."}, {"start": 161.32, "end": 165.32, "text": " This will be a touch more involved than previous interpolation efforts."}, {"start": 165.32, "end": 170.96, "text": " You see, in this previous paper we had a source and a target image, and the AI was asked"}, {"start": 170.96, "end": 174.32000000000002, "text": " to generate intermediate images between them."}, {"start": 174.32000000000002, "end": 175.48000000000002, "text": " Simple enough."}, {"start": 175.48000000000002, "end": 180.24, "text": " In this case, however, we have not two but three images as an input."}, {"start": 180.24, "end": 181.84, "text": " There will be a content image."}, {"start": 181.84, "end": 187.04000000000002, "text": " This will provide the high-level features, such as pose, and its style is going to transition"}, {"start": 187.04000000000002, "end": 189.08, "text": " from this to this."}, {"start": 189.08, "end": 194.84, "text": " The goal is that the content image remains intact while transforming one breed or species"}, {"start": 194.84, "end": 199.92000000000002, "text": " into another."}, {"start": 199.92, "end": 202.88, "text": " This particular example is one of my favorites."}, {"start": 202.88, "end": 208.2, "text": " Such a beautiful transition and surely not all, but many of the intermediate images could"}, {"start": 208.2, "end": 209.64, "text": " stand on their own."}, {"start": 209.64, "end": 213.64, "text": " Again, the style images are from unseen species."}, {"start": 213.64, "end": 217.72, "text": " Not all cases do this well with the intermediate images, however."}, {"start": 217.72, "end": 223.16, "text": " Here we start with one eye because the content and the style image have one eye visible"}, {"start": 223.16, "end": 226.95999999999998, "text": " while the target style of the owl has two."}, {"start": 226.95999999999998, "end": 228.44, "text": " How do we solve that?"}, {"start": 228.44, "end": 232.48, "text": " Of course, with nuclear vision."}, {"start": 232.48, "end": 234.8, "text": " Look."}, {"start": 234.8, "end": 235.8, "text": " Very amusing."}, {"start": 235.8, "end": 241.07999999999998, "text": " Loving this example, especially how impossible it seems because the owl is looking into the"}, {"start": 241.07999999999998, "end": 246.48, "text": " camera with both eyes while we see its backside below the head."}, {"start": 246.48, "end": 251.68, "text": " If it looked to the side like the input content 
image, this might be a possibility, but with"}, {"start": 251.68, "end": 257.0, "text": " this contorted body posture, I am not so sure, so I'll give it a pass on this one."}, {"start": 257.0, "end": 262.52, "text": " So there you go, transforming one known animal into a different one that the AI has never"}, {"start": 262.52, "end": 263.84, "text": " seen before."}, {"start": 263.84, "end": 267.68, "text": " And it is already doing a more than formidable job at that."}, {"start": 267.68, "end": 269.32, "text": " What a time to be alive."}, {"start": 269.32, "end": 272.76, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 272.76, "end": 278.72, "text": " If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 278.72, "end": 286.48, "text": " They've recently launched Quadro RTX 6000, RTX 8000 and V100 instances and hold onto"}, {"start": 286.48, "end": 293.08000000000004, "text": " your papers because Lambda GPU Cloud can cost less than half of AWS and Asia."}, {"start": 293.08000000000004, "end": 298.64000000000004, "text": " Plus, they are the only Cloud service with 48GB RTX 8000."}, {"start": 298.64000000000004, "end": 305.0, "text": " Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances,"}, {"start": 305.0, "end": 306.84000000000003, "text": " workstations or servers."}, {"start": 306.84000000000003, "end": 312.8, "text": " Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances"}, {"start": 312.8, "end": 313.8, "text": " today."}, {"start": 313.8, "end": 318.32, "text": " Thanks to Lambda for their long-standing support and for helping us make better videos"}, {"start": 318.32, "end": 319.32, "text": " for you."}, {"start": 319.32, "end": 346.56, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=MwCgvYtOLS0
TecoGAN: Super Resolution Extraordinaire!
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their instrumentation of a previous paper is available here: https://app.wandb.ai/authors/alae/reports/Adversarial-Latent-Autoencoders--VmlldzoxNDA2MDY 📝 The paper "Learning Temporal Coherence via Self-Supervision for GAN-based Video Generation" is available here: https://ge.in.tum.de/publications/2019-tecogan-chu/ The legendary Wavelet Turbulence paper is available here: https://www.cs.cornell.edu/~tedkim/WTURB/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #enhance #superresolution
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. Let's talk about video super-resolution. The problem statement is simple: in goes a coarse video, the technique analyzes it, guesses what's missing, and out comes a detailed video. However, of course, reliably solving this problem is anything but simple. When learning based algorithms were not nearly as good as they are today, this problem was mainly handled by handcrafted techniques, but they had their limits. After all, if we don't see something too well, how could we tell what's there? And this is where new learning based methods, especially this one, come into play. This is a hard enough problem for even a still image, yet this technique is able to do it really well, even for videos. Let's have a look. The eye color for this character is blurry, but we see that it likely has a greenish, blueish color, and if we gave this problem to a human, this human would know that we are talking about the eye of another human, and we know roughly what this should look like in reality. A human would also know that this must be a bridge and finish the picture. But, what about computers? The key is that if we have a learning algorithm that looks at the coarse and fine version of the same video, it will hopefully learn what it takes to create a detailed video when given a poor one, which is exactly what happened here. As you see, we can give it very little information and it was able to add a stunning amount of detail to it. Now, of course, super resolution is a highly studied field these days, therefore it is a requirement for a good paper to compare to quite a few previous works. Let's see how it stacks up against those. Here, we are given a blocky image of this garment and this is the reference image that was coarsened to create this input. The reference was carefully hidden from the algorithms and only we have it. Previous works could add some details, but the results were nowhere near as good as the reference. So what about the new method? My goodness, it is very close to the real deal. Previous methods also had trouble resolving the details of this region, whereas the new method is, again, very close to reality. It is truly amazing how much this technique understands the world around us from just this training set of low and high resolution videos. Now, if you have a closer look at the author list, you see that Nils Thuerey is also there. He is a fluid and smoke person, so I thought there had to be an angle here for smoke simulations. And yep, there we go. To even have a fighting chance of understanding the importance of this sequence, let's go back to one of Nils' earlier works, which is one of the best papers ever written, Wavelet Turbulence. That's a paper from 12 years ago. Now, some of the more seasoned fellow scholars among you know that I bring this paper up every chance I get, but especially now that it connects to this work we are looking at. You see, Wavelet Turbulence was an algorithm that could take a coarse smoke simulation after it has been created and add fine details to it. In fact, so many fine details that creating the equivalently high resolution simulation would have been near impossible at the time. However, it did not work with images; it required knowledge about the inner workings of the simulator. For instance, it would need to know about the velocities and pressures at different points in this simulation. 
Now, this new method can do something very similar and all it does is just look at the image itself and improve it without even looking into the simulation data. Even though the flaws in the output are quite clear, the fact that it can add fine details to a rapidly moving smoke plume is an incredible feat. If you look at the comparison against CycleGAN, a technique from just three years ago, this is just a few more papers down the line and you see that this has improved significantly. And the new one is also more careful with temporal coherence or, in other words, there is no flickering arising from solving the adjacent frames in the video differently. Very good. And if we look a few more papers down the line, we may just get a learning based algorithm that does so well at this task that we would be able to rewatch any old footage in super high quality. What a time to be alive! What you see here is an instrumentation for a previous paper that we covered in this series which was made by Weights & Biases. I think organizing these experiments really showcases the usability of their system. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you have an open source, academic or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or click the link in the video description to start tracking your experiments in 5 minutes. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
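To make the training setup a bit more tangible, here is a minimal, hypothetical sketch of how coarse/fine training pairs and a temporal-coherence penalty could be wired up; the stand-in "network", the scale factor, and the loss weights are my own assumptions, and this is not the TecoGAN training code.

import torch
import torch.nn.functional as F

def make_pair(hi_res_frame, scale=4):
    """Coarsen a ground-truth frame to create the network input."""
    lo = F.avg_pool2d(hi_res_frame, kernel_size=scale)
    return lo, hi_res_frame

def sr_loss(pred_t, pred_prev, target_t, temporal_weight=0.1):
    """Reconstruction term plus a crude penalty on frame-to-frame flicker."""
    reconstruction = F.l1_loss(pred_t, target_t)
    temporal = F.l1_loss(pred_t, pred_prev)   # real methods warp the previous frame with optical flow first
    return reconstruction + temporal_weight * temporal

frames = torch.rand(2, 3, 128, 128)           # two consecutive "ground truth" frames
lo0, hi0 = make_pair(frames[0:1])
lo1, hi1 = make_pair(frames[1:2])
upsample = lambda x: F.interpolate(x, scale_factor=4, mode="bilinear",
                                   align_corners=False)   # stand-in for the learned super-resolution network
print(sr_loss(upsample(lo1), upsample(lo0), hi1).item())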
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karojejona Ifehir."}, {"start": 4.8, "end": 7.5200000000000005, "text": " Let's talk about video super-resolution."}, {"start": 7.5200000000000005, "end": 13.48, "text": " The problem statement is simple, in goes a course video, the technique analyzes it, guesses"}, {"start": 13.48, "end": 17.28, "text": " what's missing, and out comes a detailed video."}, {"start": 17.28, "end": 22.16, "text": " However, of course, reliably solving this problem is anything but simple."}, {"start": 22.16, "end": 26.6, "text": " When learning based algorithms were not nearly as good as they are today, this problem was"}, {"start": 26.6, "end": 31.040000000000003, "text": " mainly handled by handcrafted techniques, but they had their limits."}, {"start": 31.040000000000003, "end": 35.68, "text": " After all, if we don't see something too well, how could we tell what's there?"}, {"start": 35.68, "end": 40.64, "text": " And this is where new learning based methods, especially this one, come into play."}, {"start": 40.64, "end": 45.72, "text": " This is a hard enough problem for even a still image, yet this technique is able to do"}, {"start": 45.72, "end": 48.36, "text": " it really well, even for videos."}, {"start": 48.36, "end": 49.84, "text": " Let's have a look."}, {"start": 49.84, "end": 54.8, "text": " The eye color for this character is blurry, but we see that it likely has a greenish,"}, {"start": 54.8, "end": 59.64, "text": " blueish color, and if we gave this problem to a human, this human would know that we"}, {"start": 59.64, "end": 64.08, "text": " are talking about the eye of another human and we know roughly what this should look"}, {"start": 64.08, "end": 65.72, "text": " like in reality."}, {"start": 65.72, "end": 70.36, "text": " A human would also know that this must be a bridge and finish the picture."}, {"start": 70.36, "end": 72.64, "text": " But, what about computers?"}, {"start": 72.64, "end": 77.84, "text": " The key is that if we have a learning algorithm that looks at the course and find version of"}, {"start": 77.84, "end": 82.16, "text": " the same video, it will hopefully learn what it takes to create a detailed video when"}, {"start": 82.16, "end": 85.72, "text": " given a poor one, which is exactly what happened here."}, {"start": 85.72, "end": 91.39999999999999, "text": " As you see, we can give it very little information and it was able to add a stunning amount of"}, {"start": 91.39999999999999, "end": 92.39999999999999, "text": " detail to it."}, {"start": 92.39999999999999, "end": 97.72, "text": " Now, of course, super resolution is a highly studied field these days, therefore it is a"}, {"start": 97.72, "end": 102.6, "text": " requirement for a good paper to compare to quite a few previous works."}, {"start": 102.6, "end": 105.19999999999999, "text": " Let's see how it stacks up against those."}, {"start": 105.19999999999999, "end": 110.6, "text": " Here, we are given a blocky image of this garment and this is the reference image that"}, {"start": 110.6, "end": 113.32, "text": " was coarsened to create this input."}, {"start": 113.32, "end": 118.11999999999999, "text": " The reference was carefully hidden from the algorithms and only we have it."}, {"start": 118.11999999999999, "end": 122.72, "text": " Previous works could add some details, but the results were nowhere near as good as the"}, {"start": 122.72, "end": 123.72, "text": " reference."}, {"start": 123.72, "end": 126.36, 
"text": " So what about the new method?"}, {"start": 126.36, "end": 130.79999999999998, "text": " My goodness, it is very close to the real deal."}, {"start": 130.79999999999998, "end": 135.51999999999998, "text": " Previous methods also had trouble resolving the details of this region, where the new method"}, {"start": 135.51999999999998, "end": 138.24, "text": " again, very close to reality."}, {"start": 138.24, "end": 142.88, "text": " It is truly amazing how much this technique understands the work around us from just"}, {"start": 142.88, "end": 147.20000000000002, "text": " this training set of low and high resolution videos."}, {"start": 147.20000000000002, "end": 153.56, "text": " Now, if you have a closer look at the author list, you see that Nils Terrey is also there."}, {"start": 153.56, "end": 159.72, "text": " He is a fluid and smoke person, so I thought there had to be an angle here for smoke simulations."}, {"start": 159.72, "end": 162.12, "text": " And yep, there we go."}, {"start": 162.12, "end": 166.84, "text": " To even have a fighting chance of understanding the importance of this sequence, let's go"}, {"start": 166.84, "end": 171.92000000000002, "text": " back to Nils' earlier works, which is one of the best papers ever written, Wavelet"}, {"start": 171.92000000000002, "end": 172.92000000000002, "text": " Terbillens."}, {"start": 172.92000000000002, "end": 175.76, "text": " That's a paper from 12 years ago."}, {"start": 175.76, "end": 180.48000000000002, "text": " Now, some of the more seasoned fellow scholars among you know that I bring this paper up"}, {"start": 180.48000000000002, "end": 186.0, "text": " every chance I get, but especially now that it connects to this work we are looking at."}, {"start": 186.0, "end": 191.44, "text": " You see, Wavelet Terbillens was an algorithm that could take a coarse smoke simulation after"}, {"start": 191.44, "end": 195.68, "text": " it has been created and added find it test with."}, {"start": 195.68, "end": 200.64000000000001, "text": " In fact, so many fine details that creating the equivalently high resolution simulation"}, {"start": 200.64000000000001, "end": 203.56, "text": " would have been near impossible at the time."}, {"start": 203.56, "end": 209.96, "text": " However, it did not work with images, it required knowledge about the inner workings of the simulator."}, {"start": 209.96, "end": 214.32, "text": " For instance, it would need to know about the velocities and pressures at different points"}, {"start": 214.32, "end": 215.64000000000001, "text": " in this simulation."}, {"start": 215.64000000000001, "end": 222.20000000000002, "text": " Now, this new method can do something very similar and all it does is just look at the image"}, {"start": 222.2, "end": 227.72, "text": " itself and improve it without even looking into the simulation data."}, {"start": 227.72, "end": 232.64, "text": " Even though the flaws in the output are quite clear, the fact that it can add find details"}, {"start": 232.64, "end": 236.44, "text": " to a rapidly moving smoke plume is an incredible feat."}, {"start": 236.44, "end": 241.51999999999998, "text": " If you look at the comparison against CycleGan, a technique from just three years ago, this"}, {"start": 241.51999999999998, "end": 247.16, "text": " is just a few more papers down the line and you see that this has improved significantly."}, {"start": 247.16, "end": 252.96, "text": " And the new one is also more careful with temporal coherence or, in other words, there is no flickering"}, 
{"start": 252.96, "end": 257.56, "text": " arising from solving the adjacent frames in the video differently."}, {"start": 257.56, "end": 258.56, "text": " Very good."}, {"start": 258.56, "end": 263.36, "text": " And if we look a few more papers down the line, we may just get a learning based algorithm"}, {"start": 263.36, "end": 268.8, "text": " that does so well at this task that we would be able to rewatch any old footage in super"}, {"start": 268.8, "end": 269.8, "text": " high quality."}, {"start": 269.8, "end": 271.8, "text": " What a time to be alive!"}, {"start": 271.8, "end": 276.96, "text": " What you see here is an instrumentation for a previous paper that we covered in this series"}, {"start": 276.96, "end": 279.44, "text": " which was made by weights and biases."}, {"start": 279.44, "end": 284.79999999999995, "text": " I think organizing these experiments really showcases the usability of their system."}, {"start": 284.79999999999995, "end": 289.47999999999996, "text": " Weight and biases provides tools to track your experiments in your deep learning projects."}, {"start": 289.47999999999996, "end": 294.28, "text": " Their system is designed to save you a ton of time and money and it is actively used"}, {"start": 294.28, "end": 301.0, "text": " in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more."}, {"start": 301.0, "end": 305.91999999999996, "text": " And the best part is that if you have an open source, academic or personal project,"}, {"start": 305.92, "end": 308.04, "text": " you can use their tools for free."}, {"start": 308.04, "end": 310.56, "text": " It really is as good as it gets."}, {"start": 310.56, "end": 316.68, "text": " Make sure to visit them through wnb.com slash papers or click the link in the video description"}, {"start": 316.68, "end": 320.0, "text": " to start tracking your experiments in 5 minutes."}, {"start": 320.0, "end": 324.6, "text": " Our thanks to weights and biases for their long standing support and for helping us make"}, {"start": 324.6, "end": 325.92, "text": " better videos for you."}, {"start": 325.92, "end": 335.92, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=qeZMKgKJLX4
This AI Removes Shadows From Your Photos! 🌒
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their post on how to train distributed models is available here: https://app.wandb.ai/sayakpaul/tensorflow-multi-gpu-dist/reports/Distributed-training-in-tf.keras-with-W%26B--Vmlldzo3NzUyNA 📝 The paper "Portrait Shadow Manipulation" is available here: https://people.eecs.berkeley.edu/~cecilia77/project-pages/portrait 📝 Our paper with Activision Blizzard on subsurface scattering is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/separable-subsurface-scattering-with-activision-blizzard/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. When we look at the cover page of a magazine, we often see lots of well-made, but also idealized photos of people. Idealized here means that the photographer made them in a studio where they can add or remove light sources and move them around to bring out the best from their models. But most photos are not made in the studio, they are made out there in the wild, where the lighting is what it is and we can't control it too much. So, our question today is: what if we could change the lighting after the photo has been made? This work proposes a cool technique to do exactly that by enabling us to fix portrait photos that we would normally think of deleting. Many of these flaws have to do with the presence of shadows, and you can see here that we can really edit these after the photo has been taken. However, before we take a closer look at the editing process, we have to note that there are different kinds of shadows. One, there are shadows cast on us by external objects, let's call them foreign shadows, and there is self-shadowing, which comes from the model's own facial features. Let's call those facial shadows. So why divide them into two classes? Because we typically seek to remove foreign shadows and edit facial shadows. The removal part can be done with a learning algorithm, provided that we can teach it with a lot of training data. Let's think about ways to synthesize such a large data set, starting with the foreign shadows. We need image pairs of test subjects with and without shadows to have a neural network learn about their relation. Since removing shadows is difficult without further interfering with the image, the authors opted to do it the other way around. In other words, they take a clean photo of the subject, that's the one without the shadows, and then add shadows to it algorithmically. Very cool. And the results are not bad at all, and get this, they even accounted for subsurface scattering, which is the scattering of light under our skin. That makes a great deal of difference. This is a reference from a paper we wrote with scientists at the University of Zaragoza and the Activision Blizzard company to add this beautiful effect to their games. Here is a shadow edge without subsurface scattering, quite dark. And with subsurface scattering, you see this beautiful glowing effect. Subsurface scattering indeed makes a great deal of difference around hard shadow edges, so a huge thumbs up to the authors for including an approximation of it. However, the synthesized photos are still a little suspect. We can still tell that they are synthesized. And that is kind of the point. Our question is: can a neural network still learn the difference between a clean and a shadowy photo despite all this? And as you see, the problem is not easy. Previous methods did not do too well on these examples when you compare them to the reference solution. So let's see this new method. Wow, I can hardly believe my eyes, nearly perfect, and it learned all this not on real but on synthetic images. And believe it or not, this was only the simpler part. Now comes the hard part. Let's look at how well it performs at editing the facial shadows. We can edit both the size and the intensity of these virtual light sources. The goal is to have a little more control over the shadows in these photos, but whatever we do with them, the outputs still have to remain realistic.
Here are the before and after results. The facial shadows have been weakened, and depending on our artistic choices, we can also soften the image a great deal. Absolutely amazing. As a result, we now have a two-step algorithm that first removes foreign shadows and is then able to soften the remainder of the facial shadows, creating much more usable portrait photos of our friends, and all this after the photo has been made. What a time to be alive. Now of course, even though this technique convincingly beats previous works, it is still not perfect. The algorithm may fail to remove some highly detailed shadows. You can see how the shadow of the hair remains in the output. In this other output, the hair shadows are handled a little better. There is some dampening, but the symmetric nature of the facial shadows here puts the output in an interesting no-man's land where the opacity of the shadow has been decreased, but the result looks unnatural. I can't wait to see how this method will be improved two more papers down the line. I will be here to report on it to you, so make sure to subscribe and hit the bell icon to not miss out on that. This episode has been supported by Weights & Biases. In this post, they show you how to use their system to train distributed models with Keras. You get all the required source code to make this happen, so make sure to click the link in the video description. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you have an open source, academic, or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or click the link in the video description to start tracking your experiments in 5 minutes. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Karojona Ifehir."}, {"start": 4.8, "end": 10.96, "text": " When we look at the cover page of a magazine, we often see lots of well-made, but also idealized"}, {"start": 10.96, "end": 12.88, "text": " photos of people."}, {"start": 12.88, "end": 17.52, "text": " Idealized here means that the photographer made them in a studio where they can add or"}, {"start": 17.52, "end": 22.52, "text": " remove light sources and move them around to bring out the best from their models."}, {"start": 22.52, "end": 27.28, "text": " But most photos are not made in the studio, they are made out there in the wild where the"}, {"start": 27.28, "end": 31.040000000000003, "text": " lighting is what it is and we can't control it too much."}, {"start": 31.040000000000003, "end": 36.96, "text": " So with that today, our question is what if we could change the lighting after the photo"}, {"start": 36.96, "end": 38.480000000000004, "text": " has been made?"}, {"start": 38.480000000000004, "end": 43.58, "text": " This work proposes a cool technique to perform exactly that by enabling us to edit the"}, {"start": 43.58, "end": 48.08, "text": " shadows on a portrait photo that we would normally think of deleting."}, {"start": 48.08, "end": 52.52, "text": " Many of these have to do with the presence of shadows and you can see here that we can"}, {"start": 52.52, "end": 56.120000000000005, "text": " really edit these after the photo has been taken."}, {"start": 56.12, "end": 61.32, "text": " However, before we start taking a closer look at the editing process, we have to note that"}, {"start": 61.32, "end": 64.03999999999999, "text": " there are different kinds of shadows."}, {"start": 64.03999999999999, "end": 69.8, "text": " One, there are shadows cast on us by external objects, let's call them foreign shadows"}, {"start": 69.8, "end": 74.88, "text": " and there is self-shadowing which comes from the models own facial features."}, {"start": 74.88, "end": 77.64, "text": " Let's call those facial shadows."}, {"start": 77.64, "end": 80.8, "text": " So why divide them into two classes?"}, {"start": 80.8, "end": 86.67999999999999, "text": " For example, because we typically seek to remove foreign shadows and edit facial shadows."}, {"start": 86.67999999999999, "end": 91.67999999999999, "text": " The removal part can be done with a learning algorithm provided that we can teach it with"}, {"start": 91.67999999999999, "end": 93.6, "text": " a lot of training data."}, {"start": 93.6, "end": 97.47999999999999, "text": " Let's think about ways to synthesize such a large data set."}, {"start": 97.47999999999999, "end": 99.44, "text": " Let's start with the foreign shadows."}, {"start": 99.44, "end": 104.88, "text": " We need image pairs of test subjects with and without shadows to have a neural network"}, {"start": 104.88, "end": 107.12, "text": " learn about their relations."}, {"start": 107.12, "end": 111.96000000000001, "text": " This removing shadows is difficult without further interfering with the image, the authors"}, {"start": 111.96000000000001, "end": 114.88000000000001, "text": " opted to do it the other way around."}, {"start": 114.88000000000001, "end": 119.64, "text": " In other words, they take a clean photo of the subject, that's the one without the shadows"}, {"start": 119.64, "end": 123.76, "text": " and then add shadows to it algorithmically."}, {"start": 123.76, "end": 125.08000000000001, "text": " Very cool."}, 
{"start": 125.08000000000001, "end": 130.68, "text": " And the results are not bad at all and get this, they even accounted for subsurface scattering"}, {"start": 130.68, "end": 133.84, "text": " which is the scattering of light under our skin."}, {"start": 133.84, "end": 135.96, "text": " That makes a great deal of a difference."}, {"start": 135.96, "end": 140.92000000000002, "text": " This is a reference from a paper we wrote with scientists at the University of Zaragoza"}, {"start": 140.92000000000002, "end": 146.08, "text": " and the Activision Blizzard Company to add this beautiful effect to their games."}, {"start": 146.08, "end": 151.52, "text": " Here is a shadow edge without subsurface scattering, quite dark."}, {"start": 151.52, "end": 157.32, "text": " And with subsurface scattering, you see this beautiful glowing effect."}, {"start": 157.32, "end": 161.76000000000002, "text": " Subsurface scattering indeed makes a great deal of difference around hard shadow edges,"}, {"start": 161.76, "end": 166.23999999999998, "text": " so huge thumbs up for the authors for including an approximation of that."}, {"start": 166.23999999999998, "end": 170.84, "text": " However, the synthesized photos are still a little suspect."}, {"start": 170.84, "end": 173.51999999999998, "text": " We can still tell that they are synthesized."}, {"start": 173.51999999999998, "end": 175.64, "text": " And that is kind of the point."}, {"start": 175.64, "end": 180.68, "text": " Our question is can a neural network still learn the difference between a clean and the"}, {"start": 180.68, "end": 183.64, "text": " shadowy photo despite all this?"}, {"start": 183.64, "end": 187.04, "text": " And as you see, the problem is not easy."}, {"start": 187.04, "end": 191.48, "text": " These methods did not do too well on these examples when you compare them to the reference"}, {"start": 191.48, "end": 193.68, "text": " solution."}, {"start": 193.68, "end": 195.88, "text": " And let's see this new method."}, {"start": 195.88, "end": 203.39999999999998, "text": " Wow, I can hardly believe my eyes, nearly perfect, and it did learn all this on not real"}, {"start": 203.39999999999998, "end": 205.84, "text": " but synthetic images."}, {"start": 205.84, "end": 208.88, "text": " And believe it or not, this was only the simpler part."}, {"start": 208.88, "end": 210.88, "text": " Now comes the hard part."}, {"start": 210.88, "end": 214.95999999999998, "text": " Let's look at how well it performs at editing the facial shadows."}, {"start": 214.96, "end": 220.84, "text": " We can pretend to edit both the size and the intensity of these light sources."}, {"start": 220.84, "end": 226.0, "text": " The goal is to have a little more control over the shadows in these photos, but whatever"}, {"start": 226.0, "end": 230.32, "text": " we do with them, the outputs still have to remain realistic."}, {"start": 230.32, "end": 233.04000000000002, "text": " Here are the before and after results."}, {"start": 233.04000000000002, "end": 238.16, "text": " The facial shadows have been weakened, and depending on our artistic choices, we can also"}, {"start": 238.16, "end": 241.0, "text": " soften the image a great deal."}, {"start": 241.0, "end": 242.36, "text": " Absolutely amazing."}, {"start": 242.36, "end": 247.8, "text": " As a result, we now have a two-step algorithm that first removes foreign shadows and is"}, {"start": 247.8, "end": 253.16000000000003, "text": " able to soften the remainder of the facial shadows creating much more usable portrait 
photos"}, {"start": 253.16000000000003, "end": 257.92, "text": " of our friends and all this after the photo has been made."}, {"start": 257.92, "end": 259.64, "text": " What a time to be alive."}, {"start": 259.64, "end": 264.68, "text": " Now of course, even though this technique convincingly beats previous works, it is still"}, {"start": 264.68, "end": 265.88, "text": " not perfect."}, {"start": 265.88, "end": 269.40000000000003, "text": " The algorithm may fail to remove some highly detailed shadows."}, {"start": 269.4, "end": 274.56, "text": " You can see how the shadow of the hair remains in the output."}, {"start": 274.56, "end": 278.03999999999996, "text": " In this other output, the hair shadows are handled a little better."}, {"start": 278.03999999999996, "end": 284.08, "text": " There is some dampening, but the symmetric nature of the facial shadows here put the output"}, {"start": 284.08, "end": 289.59999999999997, "text": " results in an interesting no-man's land where the opacity of the shadow has been decreased,"}, {"start": 289.59999999999997, "end": 291.59999999999997, "text": " but the result looks unnatural."}, {"start": 291.59999999999997, "end": 296.32, "text": " I can't wait to see how this method will be improved two more papers down the line."}, {"start": 296.32, "end": 300.92, "text": " I will be here to report on it to you, so make sure to subscribe and hit the bell icon"}, {"start": 300.92, "end": 302.56, "text": " to not miss out on that."}, {"start": 302.56, "end": 305.71999999999997, "text": " This episode has been supported by weights and biases."}, {"start": 305.71999999999997, "end": 310.28, "text": " In this post, they show you how to use their system to train distributed models with"}, {"start": 310.28, "end": 311.28, "text": " cares."}, {"start": 311.28, "end": 315.36, "text": " You get all the required source code to make this happen, so make sure to click the link"}, {"start": 315.36, "end": 317.03999999999996, "text": " in the video description."}, {"start": 317.03999999999996, "end": 321.6, "text": " Weight and biases provides tools to track your experiments in your deep learning projects."}, {"start": 321.6, "end": 326.40000000000003, "text": " Their system is designed to save you a ton of time and money, and it is actively used"}, {"start": 326.40000000000003, "end": 333.16, "text": " in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more."}, {"start": 333.16, "end": 338.08000000000004, "text": " And the best part is that if you have an open source, academic, or personal project,"}, {"start": 338.08000000000004, "end": 340.20000000000005, "text": " you can use their tools for free."}, {"start": 340.20000000000005, "end": 342.72, "text": " It really is as good as it gets."}, {"start": 342.72, "end": 348.84000000000003, "text": " Make sure to visit them through wnb.com slash papers, or click the link in the video description"}, {"start": 348.84, "end": 352.15999999999997, "text": " to start tracking your experiments in 5 minutes."}, {"start": 352.15999999999997, "end": 356.71999999999997, "text": " Our thanks to weights and biases for their long standing support and for helping us make"}, {"start": 356.71999999999997, "end": 358.15999999999997, "text": " better videos for you."}, {"start": 358.16, "end": 387.72, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=SxIkQt04WCo
How Can We Simulate Water Droplets? 🌊
❤️ Check out Linode here and get $20 free credit on your account: https://www.linode.com/papers 🎬Our Instagram page with the slow-motion videos is available here: https://www.instagram.com/twominutepapers/ 📝 The paper "Codimensional Surface Tension Flow using Moving-Least-SquaresParticles" is available here: https://web.stanford.edu/~yxjin/pdf/codim.pdf https://web.stanford.edu/~yxjin/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. A computer graphics paper from approximately three years ago was able to simulate the motion of these bubbles and even these beautiful collision events between them in a matter of milliseconds. This was approximately 300 episodes ago, and in this series we always say that two more papers down the line and this will be improved significantly. So now, once again, here goes another one of those Two Minute Papers moments of truth. Now, three years later, let's see how this field evolved. Let's fire up this new technique that will now simulate the evolution of two cube-shaped droplets for us. In reality, mother nature would make sure that the surface area of these droplets is minimized. Let's see. Yes, droplets form immediately, so the simulation program understands surface tension, and the collision event is also simulated beautifully. A+. However, this was possible with previous methods, for instance a paper by the name Surface-Only Liquids could also pull it off, so what's new here? Well, let's look under the hood and find out. Oh yes, this is different. You see, normally if we do this breakdown, we get triangle meshes. This is typically how these surfaces are represented, but I don't see any meshes here, I see particles. Great, but what does this enable us to do? Look here. If we break down the simulation of this beautiful fluid polygon, we see that there is not only one kind of particle here, there are three kinds. With light blue, we see sheet particles, the yellow ones are filament particles, and if we look inside, with dark blue here, you see volume particles. With these building blocks and the proposed new simulation method, we can create much more sophisticated surface tension related phenomena. So let's do exactly that. For instance, here you see soap membranes stretching due to wind flows. They get separated, lots of topological changes take place, and the algorithm handles it correctly. In another example, this soap bubble has been initialized with a hole, and you can see it cascading through the entire surface. Beautiful work. And after we finish the simulation of these fluid chains, we can look under the hood and see how the algorithm thinks about this piece of fluid. Once again, with dark blue, we have the particles that represent the inner volume of the water chains, and there is a thin layer of sheet particles holding them together. What a clean and beautiful visualization. So how long do we have to wait to get these results? A bit. Simulating this fluid chain example took roughly 60 seconds per frame. This droplet-on-a-plane example runs approximately 10 times faster than that, it only needs 6.5 seconds for each frame. This was one of the cheaper scenes in the paper, and you may be wondering which one was the most expensive. This water bell took almost 2 minutes for each frame. And here, when you see this breakdown from the particle color coding, you know exactly what we are looking at. Since part of this algorithm runs on your processor, and a different part on your graphics card, there is plenty of room for improvements in terms of computation time for a follow-up paper. And I cannot wait to see these beautiful simulations in real time two more papers down the line. What a time to be alive. I also couldn't resist creating a slow-motion version of some of these videos. If this is something that you wish to see, make sure to click our Instagram page link in the video description for more.
This episode has been supported by Linode. Linode is the world's largest independent cloud computing provider. Linode gives you full back-end access to your server, which is your step up to powerful, fast, fully configurable cloud computing. Linode also has one-click apps that streamline your ability to deploy websites, personal VPNs, game servers, and more. If you need something as small as a personal online portfolio, Linode has your back, and if you need to manage tons of clients' websites and reliably serve them to millions of visitors, Linode can do that too. What's more, they offer affordable GPU instances featuring the Quadro RTX 6000, which is tailor-made for AI, scientific computing, and computer graphics projects. If only I had access to a tool like this while I was working on my last few papers. To receive $20 in credit on your new Linode account, visit linode.com slash papers, or click the link in the video description and give it a try today. Our thanks to Linode for supporting the series and helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 5.4, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karajola Ifehir, a computer graphics"}, {"start": 5.4, "end": 11.08, "text": " paper from approximately three years ago was able to simulate the motion of these bubbles"}, {"start": 11.08, "end": 16.64, "text": " and even these beautiful collision events between them in a matter of milliseconds."}, {"start": 16.64, "end": 21.92, "text": " This was approximately 300 episodes ago and in this series we always say that two more"}, {"start": 21.92, "end": 25.92, "text": " papers down the line and this will be improved significantly."}, {"start": 25.92, "end": 32.0, "text": " So now, once again, here goes another one of those two minute papers, moments of truth."}, {"start": 32.0, "end": 36.56, "text": " Now three years later, let's see how this field evolved."}, {"start": 36.56, "end": 41.24, "text": " Let's fire up this new technique that will now simulate the evolution of two cube shaped"}, {"start": 41.24, "end": 42.92, "text": " droplets for us."}, {"start": 42.92, "end": 48.96, "text": " In reality, modern nature would make sure that the surface area of these droplets is minimized."}, {"start": 48.96, "end": 49.96, "text": " Let's see."}, {"start": 49.96, "end": 55.72, "text": " Yes, droplets form immediately so the simulation program understands surface tension"}, {"start": 55.72, "end": 60.04, "text": " and the collision event is also simulated beautifully."}, {"start": 60.04, "end": 61.04, "text": " A+."}, {"start": 61.04, "end": 67.16, "text": " However, this was possible with previous methods, for instance a paper by the name surface-only"}, {"start": 67.16, "end": 71.08, "text": " liquids could also pull it off so what's new here?"}, {"start": 71.08, "end": 74.36, "text": " Well, let's look under the hood and find out."}, {"start": 74.36, "end": 77.12, "text": " Oh yes, this is different."}, {"start": 77.12, "end": 81.88, "text": " You see, normally if we do this breakdown, we get triangle meshes."}, {"start": 81.88, "end": 87.6, "text": " This is typically how these surfaces are represented, but I don't see any meshes here, I see"}, {"start": 87.6, "end": 88.6, "text": " particles."}, {"start": 88.6, "end": 92.24, "text": " Great, but what does this enable us to do?"}, {"start": 92.24, "end": 93.24, "text": " Look here."}, {"start": 93.24, "end": 97.64, "text": " If we break down the simulation of this beautiful fluid polygon, we see that there is not"}, {"start": 97.64, "end": 102.28, "text": " only one kind of particle here, there are three kinds."}, {"start": 102.28, "end": 108.67999999999999, "text": " With light blue, we see sheet particles, the yellow ones are filament particles, and if"}, {"start": 108.68, "end": 114.04, "text": " we look inside, with dark blue here, you see volume particles."}, {"start": 114.04, "end": 118.12, "text": " With these building blocks and the proposed new simulation method, we can create much"}, {"start": 118.12, "end": 121.96000000000001, "text": " more sophisticated surface tension related phenomena."}, {"start": 121.96000000000001, "end": 124.48, "text": " So let's do exactly that."}, {"start": 124.48, "end": 129.68, "text": " For instance, here you see soap membranes stretching due to wind flows."}, {"start": 129.68, "end": 135.36, "text": " They get separated, lots of topological changes take place, and the algorithm handles it"}, {"start": 135.36, "end": 137.32, "text": " correctly."}, {"start": 137.32, "end": 141.95999999999998, "text": " In 
another example, this soap bubble has been initialized with a hole, and you can see"}, {"start": 141.95999999999998, "end": 149.4, "text": " it cascading through the entire surface."}, {"start": 149.4, "end": 152.04, "text": " Beautiful work."}, {"start": 152.04, "end": 156.72, "text": " And after we finish the simulation of these fluid chains, we can look under the hood and"}, {"start": 156.72, "end": 160.79999999999998, "text": " see how the algorithm thinks about this piece of fluid."}, {"start": 160.79999999999998, "end": 165.6, "text": " Once again, with dark blue, we have the particles that represent the inner volume of the water"}, {"start": 165.6, "end": 170.6, "text": " chains, there is a thin layer of sheet particles holding them together."}, {"start": 170.6, "end": 173.84, "text": " What a clean and beautiful visualization."}, {"start": 173.84, "end": 177.32, "text": " So how much do we have to wait to get these results?"}, {"start": 177.32, "end": 178.32, "text": " A bit."}, {"start": 178.32, "end": 183.16, "text": " Simulating this fluid chain example took roughly 60 seconds per frame."}, {"start": 183.16, "end": 189.88, "text": " This droplet on a plane example runs approximately 10 times faster than that, it only needs 6.5"}, {"start": 189.88, "end": 191.79999999999998, "text": " seconds for each frame."}, {"start": 191.8, "end": 196.24, "text": " This was one of the cheaper scenes in the paper, and you may be wondering which one was the"}, {"start": 196.24, "end": 198.04000000000002, "text": " most expensive."}, {"start": 198.04000000000002, "end": 203.76000000000002, "text": " This water bell took almost 2 minutes for each frame."}, {"start": 203.76000000000002, "end": 208.56, "text": " And here, when you see this breakdown from the particle color coding, you know exactly"}, {"start": 208.56, "end": 210.16000000000003, "text": " what we are looking at."}, {"start": 210.16000000000003, "end": 215.28, "text": " Since part of this algorithm runs on your processor, and a different part on your graphics card,"}, {"start": 215.28, "end": 219.8, "text": " there is plenty of room for improvements in terms of the computation time for a follow-up"}, {"start": 219.8, "end": 220.8, "text": " paper."}, {"start": 220.8, "end": 226.64000000000001, "text": " And I cannot wait to see these beautiful simulations in real time 2 more papers down the line."}, {"start": 226.64000000000001, "end": 228.36, "text": " What a time to be alive."}, {"start": 228.36, "end": 233.16000000000003, "text": " I also couldn't resist creating a slow motion version of some of these videos if this"}, {"start": 233.16000000000003, "end": 237.56, "text": " is something that you wish to see, make sure to click our Instagram page link in the"}, {"start": 237.56, "end": 239.36, "text": " video description for more."}, {"start": 239.36, "end": 241.76000000000002, "text": " This episode has been supported by Linode."}, {"start": 241.76000000000002, "end": 245.48000000000002, "text": " Linode is the world's largest independent cloud computing provider."}, {"start": 245.48000000000002, "end": 250.48000000000002, "text": " Linode gives you full back and access to your server, which is your step up to powerful,"}, {"start": 250.48, "end": 253.23999999999998, "text": " fast, fully configurable cloud computing."}, {"start": 253.23999999999998, "end": 258.2, "text": " Linode also has one click apps that streamline your ability to deploy websites, personal"}, {"start": 258.2, "end": 261.15999999999997, "text": " VPNs, game 
servers, and more."}, {"start": 261.15999999999997, "end": 266.24, "text": " If you need something as small as a personal online portfolio, Linode has your back, and"}, {"start": 266.24, "end": 271.59999999999997, "text": " if you need to manage tons of clients' websites and reliably serve them to millions of visitors,"}, {"start": 271.59999999999997, "end": 273.32, "text": " Linode can do that too."}, {"start": 273.32, "end": 280.28, "text": " What's more, they offer affordable GPU instances featuring the Quadro RTX 6000, which is tailor-made"}, {"start": 280.28, "end": 285.0, "text": " for AI, scientific computing, and computer graphics projects."}, {"start": 285.0, "end": 289.91999999999996, "text": " If only I had access to a tool like this while I was working on my last few papers."}, {"start": 289.91999999999996, "end": 296.28, "text": " To receive $20 in credit on your new Linode account, visit linode.com slash papers, or"}, {"start": 296.28, "end": 300.03999999999996, "text": " click the link in the video description and give it a try today."}, {"start": 300.03999999999996, "end": 304.88, "text": " Our thanks to Linode for supporting the series and helping us make better videos for you."}, {"start": 304.88, "end": 310.88, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=u4HpryLU-VI
From Video Games To Reality…With Just One AI!
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "World-Consistent Video-to-Video Synthesis" is available here: https://nvlabs.github.io/wc-vid2vid/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Approximately three years ago, a magical learning-based algorithm appeared that was capable of translating a photorealistic image of a zebra into a horse, or the other way around, and could transform apples into oranges and more. Later, it became possible to do this even without the presence of a photorealistic image, where all we needed was a segmentation map. This segmentation map provides labels on what should go where, for instance, this should be the road, here will be trees, traffic signs, other vehicles and so on. And the output was a hopefully photorealistic video, and you can see here that the results were absolutely jaw-dropping. However, look, as time goes by, the backside of the car morphs and warps over time, creating unrealistic results that are inconsistent even in the short term. In other words, things change around from second to second, and the AI does not appear to remember what it did just a moment ago. This kind of consistency was solved surprisingly well in a follow-up paper from NVIDIA, in which an AI would look at the footage of a video game, for instance, Pac-Man, for approximately 120 hours. We could then shut down the video game, and the AI would understand the rules so well that it could recreate the game so that we could even play with it. It had memory and used it well, and therefore it could enforce a notion of world consistency, or, in other words, if we return to a state of the game that we visited before, it will remember to present us with very similar information. So the question naturally arises: would it be possible to create a photorealistic video from the segmentation maps that is also consistent? And in today's paper, researchers at NVIDIA propose a new technique that requests some additional information, for instance, a depth map that provides a little more information on how far different parts of the image are from the camera. Much like the Pac-Man paper, this also has memory, and I wonder if it is able to use it as well as that one did. Let's test it out. This previous work is currently looking at a man with a red shirt. We slowly look away, disregard the warping, and when we go back, hey, do you see what I see here? The shirt became white. This is not because the person is one of those artists who can change their clothes in less than a second, but because this older technique did not have a consistent internal model of the world. Now, let's see the new one. Once again, we start with the red shirt, look away, and then, yes, the same red-to-blue gradient. Excellent. So it appears that this new technique also reuses information from previous frames efficiently. It is finally able to create a consistent video with much less morphing and warping. And even better, we have this advantageous consistency property, where if we look at something that we looked at before, we will see very similar information there. But there is more. Additionally, it can also generate scenes from new viewpoints, which we also refer to as neural rendering. And as you see, the two viewpoints show similar objects, so the consistency property holds here too. And now, hold onto your papers, because we do not necessarily have to produce these semantic maps ourselves. We can let the machines do all the work by firing up a video game that we like, requesting that the different object classes are colored differently, and getting this input for free. And then, the technique generated a photorealistic video from the game graphics.
Absolutely amazing. Now note that it is not perfect. For instance, it has a different notion of time as the clouds are changing in the background rapidly. And look, at the end of the sequence, we get back to our starting point and the first frame that we saw is very similar to the last one. The consistency works here too. Very good. I have no doubt that two more papers down the line and this will be even better. And for now, we can create consistent photorealistic videos even if all we have is freely obtained video game data. What a time to be alive. If you are a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you that they are offering GPU cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold onto your papers because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to lambdalabs.com slash papers and sign up for one of their amazing GPU instances today. Our thanks to Lambda for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 5.5, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karajon Aifahir, approximately three"}, {"start": 5.5, "end": 11.46, "text": " years ago, a magical learning-based algorithm appeared that was capable of translating a"}, {"start": 11.46, "end": 17.68, "text": " photorealistic image of a zebra into a horse or the other way around could transform"}, {"start": 17.68, "end": 20.34, "text": " apples into oranges and more."}, {"start": 20.34, "end": 25.96, "text": " Later, it became possible to do this even without the presence of a photorealistic image"}, {"start": 25.96, "end": 29.0, "text": " where all we needed was a segmentation map."}, {"start": 29.0, "end": 33.76, "text": " This segmentation map provides labels on what should go where, for instance, this should"}, {"start": 33.76, "end": 39.78, "text": " be the road, here will be trees, traffic signs, other vehicles and so on."}, {"start": 39.78, "end": 44.92, "text": " And the output was a hopefully photorealistic video and you can see here that the results"}, {"start": 44.92, "end": 47.56, "text": " were absolutely jaw-dropping."}, {"start": 47.56, "end": 54.760000000000005, "text": " However, look, as time goes by, the backside of the car morphs and warps over time creating"}, {"start": 54.76, "end": 59.379999999999995, "text": " unrealistic results that are inconsistent even on the short term."}, {"start": 59.379999999999995, "end": 64.52, "text": " In other words, things change around from second to second and the AI does not appear"}, {"start": 64.52, "end": 67.92, "text": " to remember what it did just a moment ago."}, {"start": 67.92, "end": 72.96, "text": " This kind of consistency was solved surprisingly well in a follow-up paper from a video in"}, {"start": 72.96, "end": 78.75999999999999, "text": " which an AI would look at the footage of a video game, for instance, Pac-Man for approximately"}, {"start": 78.76, "end": 85.0, "text": " 120 hours, we could shut down the video game and the AI would understand the rules so"}, {"start": 85.0, "end": 89.36, "text": " well that it could recreate the game that we could even play with."}, {"start": 89.36, "end": 95.28, "text": " It had memory and used it well and therefore it could enforce a notion of world consistency"}, {"start": 95.28, "end": 100.28, "text": " or, in other words, if we return to a state of the game that we visited before, it will"}, {"start": 100.28, "end": 103.96000000000001, "text": " remember to present us with very similar information."}, {"start": 103.96, "end": 109.47999999999999, "text": " So the question naturally arises would it be possible to create a photorealistic video"}, {"start": 109.47999999999999, "end": 113.36, "text": " from the segmentation maps that is also consistent?"}, {"start": 113.36, "end": 118.52, "text": " And in today's paper, researchers at MVIDIA propose a new technique that requests some"}, {"start": 118.52, "end": 123.88, "text": " additional information, for instance, a depth map that provides a little more information"}, {"start": 123.88, "end": 128.51999999999998, "text": " on how far different parts of the image are from the camera."}, {"start": 128.51999999999998, "end": 133.68, "text": " Much like the Pac-Man paper, this also has memory and I wonder if it is able to use it"}, {"start": 133.68, "end": 136.08, "text": " as well as that one did."}, {"start": 136.08, "end": 137.52, "text": " Let's test it out."}, {"start": 137.52, "end": 143.32, "text": " This previous work is currently 
looking at a man with a red shirt, with slowly look away,"}, {"start": 143.32, "end": 149.20000000000002, "text": " this regard the warping and when we go back, hey, do you see what I see here?"}, {"start": 149.20000000000002, "end": 150.96, "text": " The shirt became white."}, {"start": 150.96, "end": 154.84, "text": " This is not because the person is one of those artists who can change their clothes in"}, {"start": 154.84, "end": 160.24, "text": " less than a second, but because this older technique did not have a consistent internal"}, {"start": 160.24, "end": 162.16, "text": " model of the world."}, {"start": 162.16, "end": 164.52, "text": " Now, let's see the new one."}, {"start": 164.52, "end": 172.07999999999998, "text": " Once again, we start with the red shirt, look away, and then, yes, same red to blue gradient."}, {"start": 172.07999999999998, "end": 173.07999999999998, "text": " Excellent."}, {"start": 173.07999999999998, "end": 179.12, "text": " So it appears that this new technique also reuses information from previous frames efficiently."}, {"start": 179.12, "end": 184.72, "text": " It is finally able to create a consistent video with much less morphing and warping and"}, {"start": 184.72, "end": 185.96, "text": " even better."}, {"start": 185.96, "end": 190.28, "text": " We have these advantages, consistency property, where if you look at something that we"}, {"start": 190.28, "end": 194.56, "text": " looked at before, we will see very similar information there."}, {"start": 194.56, "end": 196.0, "text": " But there is more."}, {"start": 196.0, "end": 200.72, "text": " Additionally, it can also generate scenes from new viewpoints, which we also refer to as"}, {"start": 200.72, "end": 202.12, "text": " neural rendering."}, {"start": 202.12, "end": 207.4, "text": " And as you see, the two viewpoints show similar objects so the consistency property holds"}, {"start": 207.4, "end": 208.72, "text": " here too."}, {"start": 208.72, "end": 213.8, "text": " And now, hold onto your papers because we do not necessarily have to produce these semantic"}, {"start": 213.8, "end": 215.32, "text": " maps ourselves."}, {"start": 215.32, "end": 220.76, "text": " We can let the machines do all the work by firing up a video game that we like, request that"}, {"start": 220.76, "end": 226.88, "text": " the different object classes are colored differently and get this input for free."}, {"start": 226.88, "end": 232.0, "text": " And then, the technique generated a photorealistic video from the game graphics."}, {"start": 232.0, "end": 233.88, "text": " Absolutely amazing."}, {"start": 233.88, "end": 235.84, "text": " Now note that it is not perfect."}, {"start": 235.84, "end": 240.92, "text": " For instance, it has a different notion of time as the clouds are changing in the background"}, {"start": 240.92, "end": 244.16, "text": " rapidly."}, {"start": 244.16, "end": 249.92, "text": " And look, at the end of the sequence, we get back to our starting point and the first frame"}, {"start": 249.92, "end": 253.35999999999999, "text": " that we saw is very similar to the last one."}, {"start": 253.35999999999999, "end": 255.48, "text": " The consistency works here too."}, {"start": 255.48, "end": 256.71999999999997, "text": " Very good."}, {"start": 256.71999999999997, "end": 261.36, "text": " I have no doubt that two more papers down the line and this will be even better."}, {"start": 261.36, "end": 267.8, "text": " And for now, we can create consistent photorealistic videos even if all we have is 
freely obtained"}, {"start": 267.8, "end": 269.52, "text": " video game data."}, {"start": 269.52, "end": 271.44, "text": " What a time to be alive."}, {"start": 271.44, "end": 276.48, "text": " If you are a researcher or a startup looking for cheap GPU compute to run these algorithms,"}, {"start": 276.48, "end": 278.96, "text": " check out Lambda GPU Cloud."}, {"start": 278.96, "end": 283.76, "text": " I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you"}, {"start": 283.76, "end": 287.36, "text": " that they are offering GPU cloud services as well."}, {"start": 287.36, "end": 294.72, "text": " The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19."}, {"start": 294.72, "end": 299.72, "text": " Lambda's web-based IDE lets you easily access your instance right in your browser."}, {"start": 299.72, "end": 305.40000000000003, "text": " And finally, hold onto your papers because the Lambda GPU Cloud costs less than half of"}, {"start": 305.40000000000003, "end": 307.96000000000004, "text": " AWS and Azure."}, {"start": 307.96000000000004, "end": 313.6, "text": " Make sure to go to lambdalabs.com slash papers and sign up for one of their amazing GPU"}, {"start": 313.6, "end": 315.08000000000004, "text": " instances today."}, {"start": 315.08000000000004, "end": 318.68, "text": " Our thanks to Lambda for helping us make better videos for you."}, {"start": 318.68, "end": 330.16, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=fE9BqmJrrW0
Can We Simulate Tearing Meat? 🥩
❤️ Check out Snap's Residency Program and apply here: https://lensstudio.snapchat.com/snap-ar-creator-residency-program/?utm_source=twominutepapers&utm_medium=video&utm_campaign=tmp_ml_residency ❤️ Try Snap's Lens Studio here: https://lensstudio.snapchat.com/ 🎬Our Instagram page with the slow-motion videos is available here: https://www.instagram.com/twominutepapers/ 📝 The paper "AnisoMPM: Animating Anisotropic Damage Mechanics" is available here: https://joshuahwolper.com/anisompm ❗Erratum: At 4:17, I should have written "Anisotropic damage (new method)". Apologies! 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Perhaps the best part of being a computer graphics researcher is creating virtual worlds on a daily basis and computing beautiful simulations in these worlds. And what you see here is not one, but two kinds of simulations. One is a physics simulation that computes how these objects move, and the other is a light transport simulation that computes how these objects look. In this video, we will strictly talk about the physics simulation part of what you see on the screen. To simulate these beautiful phenomena, many recent methods build on top of a technique called the material point method. This is a hybrid simulation technique that uses both particles and grids to create these beautiful animations; however, when used by itself, we can come up with a bunch of cases that it cannot simulate properly. One such example is cracking and tearing phenomena, which was addressed in this great paper that we discussed earlier this year. With this method, we could smash Oreos, candy crabs, pumpkins, and much, much more. It even supported tearing this bread apart. This already looks quite convincing, and in this series, I always say, two more papers down the line and it will be improved significantly. Today, we are going to have a Two Minute Papers moment of truth, because this is a follow-up work from Joshua Wolper, the first author of the previous bread paper, and you can immediately start holding onto your papers because this work is one of the finest I have seen as of late. With this, we can enrich our simulations with anisotropic damage and elasticity. So what does that mean exactly? This means that it supports more extreme topological changes in these virtual objects. This leads to better material separation when the damage happens. For instance, if you look here on the right, this was done by a previous method. At first sight, it looks good, there is some bouncy behavior here, but the separation line is a little too clean. Now, let's have a look at the new method. Woohoo! Now, that's what I'm talking about. Let's have another look. I hope you now see what I meant by the previous separation line being a little too clean. Remarkably, it also supports changing a few intuitive parameters, like eta, the crack propagation speed, which we can use to further tailor the simulation to our liking. Artists are going to love this. We can also play with the Young's modulus, which describes the material's resistance against fractures. On the left, it is quite low and makes the material tear apart easily, much like a sponge. As we increase it a bit, we get a stiffer material, which gives us this glorious, floppy behavior. Let's increase it even more and see what happens. Yes, it is more resistant against damage; however, in return, it gives us some more vibrations after the break. It is not only realistic, but it also gives us the choice, with these parameters, to tailor our simulation results to our liking. Absolutely incredible. Now then, if you have been holding onto your paper so far, now squeeze that paper, because previous methods were only capable of tearing off a small piece or only a strip of this virtual pork. Let's see what this new work will do. Yes, it can also simulate peeling off an entire layer. Glorious. But that's not the only thing we can peel. It can also deal with small pieces of this mozzarella cheese.
I must admit that I have never done this myself, so this will be the official piece of homework for me and for the more curious minds out there after watching this video. Let me know in the comments if it went the same way in your kitchen as it did in the simulation here. You get extra credit if you post the picture too. And finally, if we tear this piece of meat apart, you see that it takes into consideration the location of the fibers, and the tearing takes place not in an arbitrary way but, much like in reality, along the muscle fibers. So, how fast is it? We still have to wait a few seconds for each frame in these simulations. None of them took too long. There is a fish-tearing experiment in the paper that went very quickly. Half a second for each frame is a great deal. The pork experiment took nearly 40 seconds for each frame, and the most demanding experiments involved a lance and bones. Frankly, they were a little too horrific to be included here, even for virtual bodies, but if you wish to have a look, make sure to click the paper in the video description. But wait, are you seeing what I am seeing? These examples took more than a thousand times longer to compute. Goodness, how can that be? Look, as you see here, in these cases the delta t, the time step, is extremely tiny, which means that we have to advance the simulation with tiny, tiny steps, and that takes much longer to compute. How tiny? Quite. In this case, we have to advance the simulation one millionth of a second at a time. The reason for this is that bones have an extremely high stiffness, which makes this method much less efficient. And of course, you know the drill: two more papers down the line and this may run interactively on a consumer machine at home. So, what's the verdict? Algorithm design: A plus, exposition: A plus, quality of presentation: A double plus, and this is still only Mr. Wolper's third paper in computer graphics. Unreal. And we researchers even get paid to create beautiful works like this. I also couldn't resist creating a slow-motion version of some of these videos, so if this is something that you wish to see, make sure to visit our Instagram page in the video description for more. This episode has been supported by Snap Inc. What you see here is Snap ML, a framework that helps you bring your own machine learning models to Snapchat's AR lenses. You can build augmented reality experiences for Snapchat's hundreds of millions of users and help them see the world through a different lens. You can also apply to Snap's AR creator residency program with a proposal of how you would use Lens Studio for a creative project. If selected, you could receive a grant between $1,000 and $5,000 and work with Snap's technical and creative teams to bring your ideas to life. It doesn't get any better than that. Make sure to go to the link in the video description, apply for their residency program, and try Snap ML today. Our thanks to Snap Inc. for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
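To see why the bone examples force the time step toward one millionth of a second, consider a standard back-of-the-envelope stability estimate for explicit simulations: the step must resolve the elastic wave speed, which grows with the square root of the stiffness. This is an assumed, textbook-style estimate, not the paper's exact criterion, and the grid spacing, material constants, and CFL factor below are illustrative guesses.

# Minimal sketch (assumed values, not from the paper): an explicit MPM/FEM-style
# stable time step shrinks with the elastic wave speed c = sqrt(E / rho),
# so very stiff materials such as bone force tiny steps.
import math

def stable_dt(grid_dx, youngs_modulus, density, cfl=0.5):
    wave_speed = math.sqrt(youngs_modulus / density)  # elastic wave speed in m/s
    return cfl * grid_dx / wave_speed

dx = 0.005  # 5 mm grid spacing (assumed)
print(f"soft tissue: {stable_dt(dx, 1e4, 1000):.2e} s")   # roughly 8e-4 s per step
print(f"bone:        {stable_dt(dx, 1e10, 2000):.2e} s")  # roughly 1e-6 s, about a microsecond

With a Young's modulus in the bone range, the estimate lands at about a microsecond per step, hundreds of times smaller than for a soft material, which is consistent with the dramatic slowdown described above.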
[{"start": 0.0, "end": 4.96, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. K\u00e1roly Zsolnai-Feh\u00e9r, perhaps the"}, {"start": 4.96, "end": 10.540000000000001, "text": " best part of being a computer graphics researcher, is creating virtual worlds on a daily basis"}, {"start": 10.540000000000001, "end": 13.84, "text": " and computing beautiful simulations in these worlds."}, {"start": 13.84, "end": 18.28, "text": " And what you see here is not one, but two kinds of simulations."}, {"start": 18.28, "end": 23.52, "text": " One is a physics simulation that computes how these objects move and a light transport"}, {"start": 23.52, "end": 27.04, "text": " simulation that computes how these objects look."}, {"start": 27.04, "end": 31.439999999999998, "text": " In this video, we will strictly talk about the physics simulation part of what you see"}, {"start": 31.439999999999998, "end": 32.56, "text": " on the screen."}, {"start": 32.56, "end": 37.44, "text": " To simulate this beautiful phenomena, many recent methods build on top of a technique"}, {"start": 37.44, "end": 39.76, "text": " called the material point method."}, {"start": 39.76, "end": 44.44, "text": " This is a hybrid simulation technique that uses both particles and grids to create these"}, {"start": 44.44, "end": 50.56, "text": " beautiful animations, however, when used by itself, we can come up with a bunch of cases"}, {"start": 50.56, "end": 52.76, "text": " that it cannot simulate properly."}, {"start": 52.76, "end": 57.76, "text": " One such example is cracking and tearing phenomena which has been addressed in this great paper"}, {"start": 57.76, "end": 60.0, "text": " that we discussed earlier this year."}, {"start": 60.0, "end": 66.2, "text": " With this method, we could smash Oreos, candy crabs, pumpkins, and much, much more."}, {"start": 66.2, "end": 69.47999999999999, "text": " It even supported tearing this bread apart."}, {"start": 69.47999999999999, "end": 74.6, "text": " This already looks quite convincing, and this series, I always say, two more papers down"}, {"start": 74.6, "end": 77.28, "text": " the line and it will be improved significantly."}, {"start": 77.28, "end": 82.72, "text": " Today, we are going to have a two-minute papers moment of truth because this is a follow-up"}, {"start": 82.72, "end": 88.32, "text": " work from Joshua Waupper, the first author of the previous bread paper, and you can immediately"}, {"start": 88.32, "end": 93.4, "text": " start holding onto your papers because this work is one of the finest I have seen as"}, {"start": 93.4, "end": 94.56, "text": " of late."}, {"start": 94.56, "end": 100.84, "text": " With this, we can enrich our simulations with anisotropic damage and elasticity."}, {"start": 100.84, "end": 103.12, "text": " So what does that mean exactly?"}, {"start": 103.12, "end": 108.52, "text": " This means that it supports more extreme topological changes in these virtual objects."}, {"start": 108.52, "end": 112.6, "text": " This leads to better material separation when the damage happens."}, {"start": 112.6, "end": 117.16, "text": " For instance, if you look here on the right, this was done by a previous method."}, {"start": 117.16, "end": 122.03999999999999, "text": " For the first site, it looks good, there is some bouncy behavior here, but the separation"}, {"start": 122.03999999999999, "end": 124.75999999999999, "text": " line is a little too clean."}, {"start": 124.75999999999999, "end": 127.91999999999999, "text": " Now, let's have a look at 
the new method."}, {"start": 127.91999999999999, "end": 129.24, "text": " Woohoo!"}, {"start": 129.24, "end": 133.68, "text": " Now, that's what I'm talking about."}, {"start": 133.68, "end": 135.16, "text": " Let's have another look."}, {"start": 135.16, "end": 140.88, "text": " I hope you now see what I meant by the previous separation line being a little too clean."}, {"start": 140.88, "end": 146.35999999999999, "text": " Remarkably, it also supports changing a few intuitive parameters like etha, the crack"}, {"start": 146.35999999999999, "end": 152.2, "text": " propagation speed, which we can use to further tailor the simulation to our liking."}, {"start": 152.2, "end": 154.35999999999999, "text": " Artists are going to love this."}, {"start": 154.35999999999999, "end": 158.51999999999998, "text": " We can also play with the young modulus, which describes the material's resistance"}, {"start": 158.51999999999998, "end": 159.88, "text": " against fractures."}, {"start": 159.88, "end": 166.51999999999998, "text": " On the left, it is quite low and makes the material tear apart easily much like a sponge."}, {"start": 166.52, "end": 173.16000000000003, "text": " As we increase it a bit, we get a stiffer material, which gives us this glorious, floppy behavior."}, {"start": 173.16000000000003, "end": 177.44, "text": " Let's increase it even more and see what happens."}, {"start": 177.44, "end": 183.64000000000001, "text": " Yes, it is more resistant against damage, however, in return, it gives us some more vibrations"}, {"start": 183.64000000000001, "end": 184.64000000000001, "text": " after the break."}, {"start": 184.64000000000001, "end": 189.8, "text": " It is not only realistic, but it also gives us the choice with these parameters to tailor"}, {"start": 189.8, "end": 192.60000000000002, "text": " our simulation results to our liking."}, {"start": 192.60000000000002, "end": 194.44, "text": " Absolutely incredible."}, {"start": 194.44, "end": 200.0, "text": " Now then, if you have been holding onto your paper so far, now squeeze that paper because"}, {"start": 200.0, "end": 206.4, "text": " previous methods were only capable of tearing off a small piece or only a strip of this"}, {"start": 206.4, "end": 211.28, "text": " virtual pork, let's see what this new work will do."}, {"start": 211.28, "end": 217.36, "text": " Yes, it can also simulate peeling off an entire layer."}, {"start": 217.36, "end": 218.36, "text": " Glorious."}, {"start": 218.36, "end": 220.72, "text": " But it's not the only thing we can peel."}, {"start": 220.72, "end": 224.2, "text": " It can also deal with small pieces of this mozzarella cheese."}, {"start": 224.2, "end": 229.28, "text": " I must admit that I have never done this myself, so this will be the official piece of homework"}, {"start": 229.28, "end": 233.95999999999998, "text": " for me and for the more curious minds out there after watching this video."}, {"start": 233.95999999999998, "end": 238.72, "text": " Let me know in the comments if it went the same way in your kitchen as it did in the simulation"}, {"start": 238.72, "end": 239.72, "text": " here."}, {"start": 239.72, "end": 242.67999999999998, "text": " You get extra credit if you pose the picture too."}, {"start": 242.67999999999998, "end": 247.76, "text": " And finally, if we tear this piece of meat apart, you see that it takes into consideration"}, {"start": 247.76, "end": 253.44, "text": " the location of the fibers and the tearing takes place not in an arbitrary way, but much"}, {"start": 
253.44, "end": 257.56, "text": " like in reality, it tears along the muscle fibers."}, {"start": 257.56, "end": 260.0, "text": " So, how fast is it?"}, {"start": 260.0, "end": 264.16, "text": " We still have to wait a few seconds for each frame in these simulations."}, {"start": 264.16, "end": 265.8, "text": " None of them took too long."}, {"start": 265.8, "end": 269.8, "text": " There is a fish-tearing experiment in the paper that went very quickly."}, {"start": 269.8, "end": 272.52, "text": " Half a second for each frame is a great deal."}, {"start": 272.52, "end": 277.92, "text": " The pork experiment took nearly 40 seconds for each frame and the most demanding experiments"}, {"start": 277.92, "end": 280.52, "text": " involved a lance and bones."}, {"start": 280.52, "end": 285.88, "text": " Frankly, they were a little too horrific to be included here, even for virtual bodies,"}, {"start": 285.88, "end": 290.52, "text": " but if you wish to have a look, make sure to click the paper in the video description."}, {"start": 290.52, "end": 294.08, "text": " But wait, are you seeing what I am seeing?"}, {"start": 294.08, "end": 299.12, "text": " These examples took more than a thousand times longer to compute."}, {"start": 299.12, "end": 301.52, "text": " Goodness, how can that be?"}, {"start": 301.52, "end": 307.79999999999995, "text": " Look, as you see here, in these cases, the Delta T step is extremely tiny, which means"}, {"start": 307.8, "end": 313.12, "text": " that we have to advance the simulation with tiny, tiny time steps that takes much longer"}, {"start": 313.12, "end": 314.56, "text": " to compute."}, {"start": 314.56, "end": 315.56, "text": " How tiny?"}, {"start": 315.56, "end": 316.56, "text": " Quite."}, {"start": 316.56, "end": 321.88, "text": " In this case, we have to advance the simulation one millionth of a second at a time."}, {"start": 321.88, "end": 326.48, "text": " The reason for this is that bones have an extremely high stiffness, which makes this method"}, {"start": 326.48, "end": 328.12, "text": " much less efficient."}, {"start": 328.12, "end": 333.2, "text": " And of course, you know the drill two more papers down the line and this may run interactively"}, {"start": 333.2, "end": 335.44, "text": " on a consumer machine at home."}, {"start": 335.44, "end": 337.48, "text": " So, what's the verdict?"}, {"start": 337.48, "end": 345.76, "text": " A algorithm design A plus, exposition A plus, quality of presentation A double plus,"}, {"start": 345.76, "end": 350.32, "text": " and it's still Mr. 
Walper's third paper in computer graphics."}, {"start": 350.32, "end": 351.32, "text": " Unreal."}, {"start": 351.32, "end": 356.20000000000005, "text": " And we researchers even get paid to create beautiful works like this."}, {"start": 356.20000000000005, "end": 360.96000000000004, "text": " I also couldn't resist creating a slow motion version of some of these videos, so if this"}, {"start": 360.96000000000004, "end": 365.12, "text": " is something that you wish to see, make sure to visit our Instagram page in the video"}, {"start": 365.12, "end": 366.92, "text": " description for more."}, {"start": 366.92, "end": 369.76, "text": " This episode has been supported by Snap Inc."}, {"start": 369.76, "end": 374.76, "text": " What you see here is Snap ML, a framework that helps you bring your own machine learning"}, {"start": 374.76, "end": 377.52000000000004, "text": " models to Snapchats AR lenses."}, {"start": 377.52000000000004, "end": 382.6, "text": " You can build augmented reality experiences for Snapchats hundreds of millions of users"}, {"start": 382.6, "end": 385.44, "text": " and help them see the world through a different lens."}, {"start": 385.44, "end": 390.8, "text": " You can also apply to Snap's AR creator residency program with a proposal of how you would"}, {"start": 390.8, "end": 393.76, "text": " use Lens Studio for a creative project."}, {"start": 393.76, "end": 400.71999999999997, "text": " If selected, you could receive a grant between 1 to $5,000 and work with Snap's technical"}, {"start": 400.71999999999997, "end": 403.88, "text": " and creative teams to bring your ideas to life."}, {"start": 403.88, "end": 406.2, "text": " It doesn't get any better than that."}, {"start": 406.2, "end": 410.76, "text": " Make sure to go to the link in the video description and apply for their residency program and"}, {"start": 410.76, "end": 412.92, "text": " try Snap ML today."}, {"start": 412.92, "end": 416.68, "text": " Our thanks to Snap Inc for helping us make better videos for you."}, {"start": 416.68, "end": 426.68, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=_x9AwxfjxvE
OpenAI GPT-3 - Good At Almost Everything! 🤖
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their instrumentation of a previous OpenAI paper is available here: https://app.wandb.ai/authors/openai-jukebox/reports/Experiments-with-OpenAI-Jukebox--VmlldzoxMzQwODg 📝 The paper "Language Models are Few-Shot Learners" is available here: - https://arxiv.org/abs/2005.14165 - https://openai.com/blog/openai-api/ Credits follow for the tweets of the applications. Follow their authors if you wish to see more! Website layout: https://twitter.com/sharifshameem/status/1283322990625607681 Plots: https://twitter.com/aquariusacquah/status/1285415144017797126?s=12 Typesetting math: https://twitter.com/sh_reya/status/1284746918959239168 Population data: https://twitter.com/pavtalk/status/1285410751092416513 Legalese: https://twitter.com/f_j_j_/status/1283848393832333313 Nutrition labels: https://twitter.com/lawderpaul/status/1284972517749338112 User interface design: https://twitter.com/jsngr/status/1284511080715362304 More cool applications: Generating machine learning models - https://twitter.com/mattshumer_/status/1287125015528341506?s=12 Creating animations - https://twitter.com/ak92501/status/1284553300940066818 Command line magic - https://twitter.com/super3/status/1284567835386294273?s=12 Analogies: https://twitter.com/melmitchell1/status/1291170016130412544?s=12 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #GPT3 #GPT2
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. In early 2019, a learning-based technique appeared that could perform common natural language processing operations, for instance, answering questions, completing text, reading comprehension, summarization, and more. This method was developed by scientists at OpenAI and they called it GPT-2. The goal was to be able to perform these tasks with as little supervision as possible. This means that they unleashed this algorithm to read the internet, and the question was, what would the AI learn during this process? That is a tricky question. And to be able to answer it, have a look at this paper from 2017, where an AI was given a bunch of Amazon product reviews and the goal was to teach it to be able to generate new ones, or continue a review when given one. Then, something unexpected happened. The finished neural network used surprisingly few neurons to be able to continue these reviews. They noticed that the neural network had not only built up a knowledge of language, but had also built a sentiment detector as well. This means that the AI recognized that in order to be able to continue a review, it not only needs to learn English, but also needs to be able to detect whether the review seems positive or not. If we know that we have to complete a review that seems positive from a small snippet, we have a much easier time doing it well. And now, back to GPT-2: as it was asked to predict the next character in sentences of any kind, not just reviews, we asked what this neural network would learn. Well, now we know that, of course, it learns whatever it needs to learn to perform the sentence completion properly. And to do this, it needs to learn English by itself, and that's exactly what it did. It also learned about a lot of topics to be able to discuss them well. What topics? Let's see. We gave it a try and I was somewhat surprised when I saw that it was able to continue a Two Minute Papers script, even though it seems to have turned into a history lesson. What was even more surprising is that it could shoulder the Two Minute Papers test, or, in other words, I asked it to talk about the nature of fluid simulations and it was caught cheating red-handed. But then, it continued in a way that was not only coherent, but had quite a bit of truth to it. Note that there was no explicit instruction for the AI apart from it being unleashed on the internet and reading it. And now, the next version appeared by the name GPT-3. This version is now more than 100 times bigger, so our first question is how much better can an AI get if we increase the size of a neural network? Let's have a look together. These are the results on a challenging reading comprehension test as a function of the number of parameters. As you see, around 1.5 billion parameters, which is roughly equivalent to GPT-2, it has learned a great deal, but its understanding is nowhere near the level of human comprehension. However, as we grow the network, something incredible happens. Non-trivial capabilities start to appear as we approach 100 billion parameters. Look, it nearly matched the level of humans. My goodness! This was possible before, but only with neural networks that were specifically designed for a narrow test. In comparison, GPT-3 is much more general. Let's test that generality and have a look at five practical applications together. 
One, OpenAI made this AI accessible to a lucky few people, and it turns out it has read a lot of things on the internet, which contains a lot of code, so it can generate website layouts from a written description. Two, it also learned how to generate properly formatted plots from a tiny prompt written in plain English, and not just one kind, many kinds. Perhaps to the joy of technical PhD students around the world, three, it can properly typeset mathematical equations from a plain English description as well. Four, it understands the kind of data we have in a spreadsheet, in this case population data, and fills in the missing parts correctly. And five, it can also translate a complex legal text into plain language, or the other way around; in other words, it can also generate legal text from our simple descriptions. And as you see here, it can do much, much more. I left a link to all of these materials in the video description. However, of course, this iteration of GPT also has its limitations. For instance, we haven't seen the extent to which these examples are cherry-picked, or, in other words, for every good output that we marvel at, there might have been one, or a dozen, tries that did not come out well. We don't exactly know. But the main point is that working with GPT-3 is a really peculiar process where we know that a vast body of knowledge lies within, but it only emerges if we can bring it out with properly written prompts. It almost feels like a new kind of programming that is open to everyone, even people without any programming or technical knowledge. If a computer is a bicycle for the mind, then GPT-3 is a fighter jet. Absolutely incredible. And to say that the paper is vast would be an understatement; we only scratch the surface of what it can do here, so make sure to have a look if you wish to know more about it. The link is available in the video description. I can only imagine what we will be able to do with GPT-4 and GPT-5 in the near future. What a time to be alive. What you see here is an instrumentation for a previous paper that we covered in this series, which was made by Weights & Biases. I think organizing these experiments really showcases the usability of their system. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you have an open source, academic, or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or click the link in the video description to start tracking your experiments in 5 minutes. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
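To make the point about "properly written prompts" concrete, here is a minimal sketch of the few-shot prompting style described above: a handful of English-to-LaTeX examples followed by a new query, so that the pattern itself, rather than any code change, tells the model what to do. The example pairs and the send_to_model call are illustrative placeholders, not the actual prompts or API calls used in the demos mentioned in the transcript.

```python
# A minimal few-shot prompt in the style described above.
# The example pairs and send_to_model() are illustrative placeholders.

examples = [
    ("the square root of a plus b", r"\sqrt{a + b}"),
    ("the integral of x squared from 0 to 1", r"\int_0^1 x^2 \, dx"),
    ("e to the power of i pi equals minus one", r"e^{i\pi} = -1"),
]

def build_prompt(query):
    """Assemble a few-shot prompt: task description, worked examples, then the new query."""
    lines = ["Translate plain English into LaTeX.", ""]
    for english, latex in examples:
        lines += [f"English: {english}", f"LaTeX: {latex}", ""]
    lines += [f"English: {query}", "LaTeX:"]
    return "\n".join(lines)

prompt = build_prompt("the sum of one over n squared from n equals 1 to infinity")
print(prompt)
# The assembled prompt would then be sent to the language model, e.g.:
# completion = send_to_model(prompt, max_tokens=32)  # hypothetical call
```

The same pattern, with the task line and examples swapped out, covers the plot, spreadsheet, and legal-text demos as well.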
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.6000000000000005, "end": 10.28, "text": " In early 2019, a learning-based technique appeared that could perform common natural language"}, {"start": 10.28, "end": 17.240000000000002, "text": " processing operations, for instance, answering questions, completing text, reading comprehension,"}, {"start": 17.240000000000002, "end": 19.48, "text": " summarization, and more."}, {"start": 19.48, "end": 25.36, "text": " This method was developed by scientists at OpenAI and they called it GPT2."}, {"start": 25.36, "end": 30.84, "text": " The goal was to be able to perform this task with as little supervision as possible."}, {"start": 30.84, "end": 35.92, "text": " This means that they unleashed this algorithm to read the internet and the question was,"}, {"start": 35.92, "end": 39.2, "text": " what would the AI learn during this process?"}, {"start": 39.2, "end": 41.12, "text": " That is a tricky question."}, {"start": 41.12, "end": 46.68, "text": " And to be able to answer it, have a look at this paper from 2017, where an AI was given"}, {"start": 46.68, "end": 52.04, "text": " a bunch of Amazon product reviews and the goal was to teach it to be able to generate new"}, {"start": 52.04, "end": 55.839999999999996, "text": " ones or continue a review when given one."}, {"start": 55.839999999999996, "end": 58.68, "text": " Then, something unexpected happened."}, {"start": 58.68, "end": 64.16, "text": " The finished neural network used surprisingly few neurons to be able to continue these reviews."}, {"start": 64.16, "end": 69.88, "text": " They noticed that the neural network has built up a knowledge of not only language, but"}, {"start": 69.88, "end": 73.08, "text": " also built a sentiment detector as well."}, {"start": 73.08, "end": 77.75999999999999, "text": " This means that the AI recognized that in order to be able to continue a review, it not"}, {"start": 77.76, "end": 83.24000000000001, "text": " only needs to learn English, but also needs to be able to detect whether the review seems"}, {"start": 83.24000000000001, "end": 85.52000000000001, "text": " positive or not."}, {"start": 85.52000000000001, "end": 90.08000000000001, "text": " If we know that we have to complete a review that seems positive from a small snippet,"}, {"start": 90.08000000000001, "end": 93.32000000000001, "text": " we have a much easier time doing it well."}, {"start": 93.32000000000001, "end": 99.92, "text": " And now, back to GPT2, as it was asked to predict the next character in sentences of"}, {"start": 99.92, "end": 105.24000000000001, "text": " not reviews, but of any kind, we asked what this neural network would learn."}, {"start": 105.24, "end": 110.8, "text": " Well, now we know that of course, it learns whatever it needs to learn to perform the sentence"}, {"start": 110.8, "end": 112.75999999999999, "text": " completion properly."}, {"start": 112.75999999999999, "end": 118.28, "text": " And to do this, it needs to learn English by itself, and that's exactly what it did."}, {"start": 118.28, "end": 122.36, "text": " It also learned about a lot of topics to be able to discuss them well."}, {"start": 122.36, "end": 123.36, "text": " What topics?"}, {"start": 123.36, "end": 124.56, "text": " Let's see."}, {"start": 124.56, "end": 129.56, "text": " We gave it a try and I was somewhat surprised when I saw that it was able to continue a"}, {"start": 129.56, "end": 136.16, 
"text": " two-minute paper script even though it seems to have turned into a history lesson."}, {"start": 136.16, "end": 141.6, "text": " What was even more surprising is that it could shoulder the two-minute papers test, or,"}, {"start": 141.6, "end": 146.76, "text": " in other words, I asked it to talk about the nature of fluid simulations and it was caught"}, {"start": 146.76, "end": 150.04, "text": " cheating red-handed."}, {"start": 150.04, "end": 155.4, "text": " But then, it continued in a way that was not only coherent, but had quite a bit of truth"}, {"start": 155.4, "end": 156.4, "text": " to it."}, {"start": 156.4, "end": 161.84, "text": " Note that there was no explicit instruction for the AI apart from it being unleashed on"}, {"start": 161.84, "end": 164.32, "text": " the internet and reading it."}, {"start": 164.32, "end": 168.48000000000002, "text": " And now, the next version appeared by the name GPT3."}, {"start": 168.48000000000002, "end": 174.36, "text": " This version is now more than 100 times bigger, so our first question is how much better"}, {"start": 174.36, "end": 179.04000000000002, "text": " can an AI get if we increase the size of a neural network?"}, {"start": 179.04000000000002, "end": 180.44, "text": " Let's have a look together."}, {"start": 180.44, "end": 185.28, "text": " These are the results on a challenging reading comprehension test as a function of the number"}, {"start": 185.28, "end": 186.64000000000001, "text": " of parameters."}, {"start": 186.64000000000001, "end": 192.76, "text": " As you see, around 1.5 billion parameters, which is roughly equivalent to GPT2, it has learned"}, {"start": 192.76, "end": 199.04, "text": " a great deal, but its understanding is nowhere near the level of human comprehension."}, {"start": 199.04, "end": 204.08, "text": " However, as we grow the network, something incredible happens."}, {"start": 204.08, "end": 209.08, "text": " Non-trivial capabilities start to appear as we approach the 100 billion parameters."}, {"start": 209.08, "end": 213.32, "text": " Look, it nearly matched the level of humans."}, {"start": 213.32, "end": 214.8, "text": " My goodness!"}, {"start": 214.8, "end": 220.16000000000003, "text": " This was possible before, but only with neural networks that are specifically designed for"}, {"start": 220.16000000000003, "end": 221.60000000000002, "text": " a narrow test."}, {"start": 221.60000000000002, "end": 225.16000000000003, "text": " In comparison, GPT3 is much more general."}, {"start": 225.16000000000003, "end": 231.24, "text": " Let's test that generality and have a look at 5 practical applications together."}, {"start": 231.24, "end": 237.32000000000002, "text": " One, open AI made this AI accessible to a lucky few people, and it turns out it has read"}, {"start": 237.32000000000002, "end": 243.88000000000002, "text": " a lot of things on the internet which contains a lot of code, so it can generate website layouts"}, {"start": 243.88, "end": 246.12, "text": " from a written description."}, {"start": 246.12, "end": 253.28, "text": " Two, it also learned how to generate properly formatted plots from a tiny prompt written"}, {"start": 253.28, "end": 258.24, "text": " in plain English, not just one kind, many kinds."}, {"start": 258.24, "end": 263.28, "text": " Perhaps to the joy of technical PhD students around the world, three, it can properly"}, {"start": 263.28, "end": 269.12, "text": " pipe set mathematical equations from a plain English description as well."}, {"start": 269.12, "end": 
276.64, "text": " Four, it understands the kind of data we have in a spreadsheet, in this case population"}, {"start": 276.64, "end": 285.24, "text": " and feels the missing parts correctly."}, {"start": 285.24, "end": 291.6, "text": " And five, it can also translate a complex legal text into plain language, or the other way"}, {"start": 291.6, "end": 296.92, "text": " around, in other words, it can also generate legal text from our simple descriptions."}, {"start": 296.92, "end": 304.76, "text": " And as you see here, it can do much, much more."}, {"start": 304.76, "end": 308.24, "text": " I left a link to all of these materials in the video description."}, {"start": 308.24, "end": 313.40000000000003, "text": " However, of course, this iteration of GPT also has its limitations."}, {"start": 313.40000000000003, "end": 318.52000000000004, "text": " For instance, we haven't seen the extent to which these examples are cherry-picked, or,"}, {"start": 318.52000000000004, "end": 323.84000000000003, "text": " in other words, for every good output that we marvel at, there might have been one, or"}, {"start": 323.84000000000003, "end": 326.48, "text": " a dozen tries that did not come out well."}, {"start": 326.48, "end": 328.36, "text": " We don't exactly know."}, {"start": 328.36, "end": 333.56, "text": " But the main point is that working with GPT3 is a really peculiar process where we know"}, {"start": 333.56, "end": 339.56, "text": " that a vast body of knowledge lies within, but it only emerges if we can bring it out"}, {"start": 339.56, "end": 341.64000000000004, "text": " with properly written prompts."}, {"start": 341.64000000000004, "end": 347.28000000000003, "text": " It almost feels like a new kind of programming that is open to everyone, even people without"}, {"start": 347.28000000000003, "end": 350.04, "text": " any programming, or technical knowledge."}, {"start": 350.04, "end": 355.48, "text": " If a computer is a bicycle for the mind, then GPT3 is a fighter jet."}, {"start": 355.48, "end": 357.0, "text": " Absolutely incredible."}, {"start": 357.0, "end": 361.64000000000004, "text": " And to say that the paper is vast would be an understatement, we only scratch the surface"}, {"start": 361.64000000000004, "end": 365.92, "text": " of what it can do here, so make sure to have a look if you wish to know more about it."}, {"start": 365.92, "end": 368.20000000000005, "text": " The link is available in the video description."}, {"start": 368.20000000000005, "end": 375.28000000000003, "text": " I can only imagine what we will be able to do with GPT4 and GPT5 in the near future."}, {"start": 375.28000000000003, "end": 377.0, "text": " What a time to be alive."}, {"start": 377.0, "end": 381.44, "text": " What you see here is an instrumentation for a previous paper that we covered in this"}, {"start": 381.44, "end": 384.56, "text": " series, which was made by weights and biases."}, {"start": 384.56, "end": 390.0, "text": " I think organizing these experiments really showcases the usability of their system."}, {"start": 390.0, "end": 394.64, "text": " Weight and biases provides tools to track your experiments in your deep learning projects."}, {"start": 394.64, "end": 399.4, "text": " Their system is designed to save you a ton of time and money, and it is actively used"}, {"start": 399.4, "end": 406.12, "text": " in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more."}, {"start": 406.12, "end": 411.08, "text": " And the best part is that if you have an open 
source, academic, or personal project,"}, {"start": 411.08, "end": 413.2, "text": " you can use their tools for free."}, {"start": 413.2, "end": 415.71999999999997, "text": " It really is as good as it gets."}, {"start": 415.71999999999997, "end": 421.84, "text": " Make sure to visit them through wnbe.com slash papers, or click the link in the video description"}, {"start": 421.84, "end": 425.15999999999997, "text": " to start tracking your experiments in 5 minutes."}, {"start": 425.15999999999997, "end": 429.76, "text": " Our thanks to weights and biases for their long-standing support and for helping us make"}, {"start": 429.76, "end": 431.15999999999997, "text": " better videos for you."}, {"start": 431.16, "end": 458.6, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=nkHL1GNU18M
Physics in 4 Dimensions…How?
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://www.wandb.com/articles/how-to-visualize-models-in-tensorboard-with-weights-and-biases 📝 The paper "N-Dimensional Rigid Body Dynamics" is available here: https://marctenbosch.com/ndphysics/ Check out these two 4D games here: 4D Toys: https://4dtoys.com/ Miegakure (still in the works): https://miegakure.com/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. More info if you would like to appear here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. I recommend that you immediately start to hold onto your papers, because this work is about creating physics simulations in higher than three spatial dimensions. The research in this paper is used to create this game that takes place in a 4D world, and it is just as beautiful as it is confusing. And, I'll be honest, the more we look, the less this world seems to make sense to us. So let's try to understand what is happening here together. You see an example of a 4D simulation here. The footage seems very cool, but I can't help but notice that sometimes things just seem to disappear into the ether. In other cases they suddenly appear from nowhere. Hmm, how can that happen? Let's decrease the dimensionality of the problem and suddenly we will easily understand how this is possible. Here we take a 2D slice of our 3D world, start the simulation, and imagine that we only see what happens on this slice. We see nothing else, just the slice. That sounds fine, things move around freely, nothing crazy going on, and suddenly, hey, things slowly disappear. The key in this example is that luckily we can also look at the corresponding 3D simulation on your display. You can see not only the 2D slice, but everything around it. So, the playbook is as follows. If something disappears, it means that it went back or forward in the 3rd dimension. So what we see on the slice is not the object itself, but its intersection with the 3D object. The smaller the intersection, the more we see only the side of the sphere, and the object appears to be smaller. Or, even better, look, the ball is even floating in the air. When we look at the 3D world, we see that it only appears like that, but in reality, what happens is that we see the side of the sphere. Really cool. We now understand that slicing can create a mirage that seems as if objects were suddenly receding, disappearing, and even floating. Just imagine how confusing this would be if we were this 2D character. Actually, you don't even need to imagine that, because this piece of research work was made to be able to create computer games where we are a 3D character in a 4D world, so we can experience all of these confusing phenomena. However, we get some help in decoding this situation, because even though this example runs in 3D, when something disappears into the ether, we can move along the 4th dimension and find it. So good. Floating behavior can also happen, and now we know exactly why. But wait, in this 3D domain, even more confusing new things happen. As you see here, in 3D, cubes seem to be changing shape, and the reason for this is the same: we just see a higher-dimensional object's intersection with our lower-dimensional space. But the paper does not only extend 3D rigid body simulations to 4D, it extends them to any higher dimension. You see, this kind of footage can only be simulated because it also proposes a way to compute collisions, static and kinetic friction, and similar physical forces in arbitrary dimensions. And now, hold onto your papers, because this framework can also be used to make amazing mind-bending game puzzles where we can miraculously unbind seemingly impossible configurations by reaching into the 4th dimension. This is one of those crazy research works that truly cannot be bothered by recent trends and hot topics, and I mean it in the best possible way. It creates its own little world and invites us to experience it, and it truly is a sight to behold. 
I can only imagine how one of these soft body or fluid simulations would look in higher dimensions, and boy, would I love to see some follow-up papers perform something like this. If you wish to see more, make sure to have a look at the paper in the video description, and I also put a link to one 4D game that is under development and one that you can buy and play today. Huge congratulations to Marc ten Bosch, who got this single-author paper accepted to SIGGRAPH, perhaps the most prestigious computer graphics conference. Not only that, but I am pretty sure that I have never seen a work from an indie game developer presented as a SIGGRAPH technical paper. Bravo! This episode has been supported by Weights & Biases. In this post they show you how to use their system with TensorBoard and visualize your results with it. You can even try an example in an interactive notebook through the link in the video description. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you have an open source, academic, or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or click the link in the video description to start tracking your experiments in 5 minutes. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
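A quick numerical sketch of the slicing idea explained in the transcript above: what we observe on a slice is only the intersection of the higher-dimensional shape with our lower-dimensional space, which is why objects seem to shrink, float, or vanish. For a sphere of radius R whose center sits a distance d from the slicing plane, the visible circle has radius sqrt(R^2 - d^2) and disappears once d exceeds R. The numbers below are made up purely for illustration.

```python
# Radius of the circle we see when a 2D plane slices a 3D sphere.
# The same logic carries over to a 3D "slice" of a 4D object: we only
# ever see the intersection, so objects appear to shrink, float, or vanish.
import math

def slice_radius(sphere_radius, distance_to_plane):
    """Visible radius of the intersection, or None if the plane misses the sphere."""
    if abs(distance_to_plane) > sphere_radius:
        return None  # the sphere has "disappeared into the ether"
    return math.sqrt(sphere_radius**2 - distance_to_plane**2)

R = 1.0
for d in [0.0, 0.5, 0.9, 0.99, 1.1]:  # the sphere drifting away from our slice
    r = slice_radius(R, d)
    print(f"d = {d:4.2f} -> visible radius: {'gone' if r is None else round(r, 3)}")
```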
[{"start": 0.0, "end": 4.46, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Karo Zonai-Fehir."}, {"start": 4.46, "end": 9.76, "text": " I recommend that you immediately start to hold onto your papers because this work is"}, {"start": 9.76, "end": 15.08, "text": " about creating physics simulations in higher than three spatial dimensions."}, {"start": 15.08, "end": 20.48, "text": " The research in this paper is used to create this game that takes place in a 4D world and"}, {"start": 20.48, "end": 23.56, "text": " it is just as beautiful as it is confusing."}, {"start": 23.56, "end": 28.84, "text": " And, I'll be honest, the more we look, the less this world seems to make sense for us."}, {"start": 28.84, "end": 32.48, "text": " So let's try to understand what is happening here together."}, {"start": 32.48, "end": 35.980000000000004, "text": " You see an example of a 4D simulation here."}, {"start": 35.980000000000004, "end": 41.88, "text": " The footage seems very cool, but I can't help but notice that sometimes things just seem"}, {"start": 41.88, "end": 44.28, "text": " to disappear into the ether."}, {"start": 44.28, "end": 47.92, "text": " In other cases they suddenly appear from nowhere."}, {"start": 47.92, "end": 50.480000000000004, "text": " Hmm, how can that happen?"}, {"start": 50.480000000000004, "end": 55.400000000000006, "text": " Let's decrease the dimensionality of the problem and suddenly we will easily understand how"}, {"start": 55.400000000000006, "end": 56.96, "text": " this is possible."}, {"start": 56.96, "end": 63.24, "text": " Here we take a 2D slice of our 3D world, start the simulation and imagine that we only"}, {"start": 63.24, "end": 65.96000000000001, "text": " see what happens on this slice."}, {"start": 65.96000000000001, "end": 68.8, "text": " We see nothing else just the slice."}, {"start": 68.8, "end": 75.32, "text": " That sounds fine, things move around freely, nothing crazy going on, and suddenly, hey,"}, {"start": 75.32, "end": 77.68, "text": " things slowly disappear."}, {"start": 77.68, "end": 83.16, "text": " The key in this example is that luckily we can also look at the corresponding 3D simulation"}, {"start": 83.16, "end": 84.6, "text": " on your display."}, {"start": 84.6, "end": 89.11999999999999, "text": " You can see not only the 2D slice, but everything around it."}, {"start": 89.11999999999999, "end": 91.64, "text": " So, the playbook is as follows."}, {"start": 91.64, "end": 97.36, "text": " If something disappears, it means that it went back or forward in the 3rd dimension."}, {"start": 97.36, "end": 103.19999999999999, "text": " So what we see on the slice is not the object itself, but its intersection with the 3D"}, {"start": 103.19999999999999, "end": 104.19999999999999, "text": " object."}, {"start": 104.19999999999999, "end": 109.03999999999999, "text": " The smaller the intersection, the more we see the side of the sphere and the object appears"}, {"start": 109.03999999999999, "end": 112.91999999999999, "text": " to be smaller or even better."}, {"start": 112.92, "end": 117.6, "text": " Look, the ball is even floating in the air."}, {"start": 117.6, "end": 123.08, "text": " When we look at the 3D world, we see that this only appears like that, but in reality,"}, {"start": 123.08, "end": 126.52, "text": " what happens is that we see the side of the sphere."}, {"start": 126.52, "end": 127.76, "text": " Really cool."}, {"start": 127.76, "end": 133.68, "text": " We now understand that colliding can create a mirage 
that seems as if objects were suddenly"}, {"start": 133.68, "end": 138.08, "text": " receding, disappearing, and even floating."}, {"start": 138.08, "end": 142.48000000000002, "text": " Just imagine how confusing this would be if we were this 2D character."}, {"start": 142.48, "end": 146.92, "text": " Actually, you don't even need to imagine that because this piece of research work was"}, {"start": 146.92, "end": 153.12, "text": " made to be able to create computer games where we are a 3D character in a 4D world so"}, {"start": 153.12, "end": 156.23999999999998, "text": " we can experience all of these confusing phenomena."}, {"start": 156.23999999999998, "end": 161.56, "text": " However, we get some help in decoding this situation because even though this example runs"}, {"start": 161.56, "end": 167.72, "text": " in 3D, when something disappears into the ether, we can move along the 4th dimension and"}, {"start": 167.72, "end": 168.72, "text": " find it."}, {"start": 168.72, "end": 169.72, "text": " So good."}, {"start": 169.72, "end": 177.24, "text": " Floating behavior can also happen and now we know exactly why."}, {"start": 177.24, "end": 182.36, "text": " But wait, in this 3D domain, even more confusing, new things happen."}, {"start": 182.36, "end": 187.8, "text": " As you see here, in 3D, cubes seem to be changing shape and the reason for this is the"}, {"start": 187.8, "end": 194.48, "text": " same, we just see a higher dimensional object's intersection with our lower dimensional space."}, {"start": 194.48, "end": 201.48, "text": " But the paper does not only extend 3D rigid body dynamic simulations to not only 2D, but"}, {"start": 201.48, "end": 203.48, "text": " to any higher dimension."}, {"start": 203.48, "end": 209.79999999999998, "text": " You see, this kind of footage can only be simulated because it also proposes a way to compute collisions,"}, {"start": 209.79999999999998, "end": 215.39999999999998, "text": " static and kinetic friction and similar physical forces in arbitrary dimensions."}, {"start": 215.39999999999998, "end": 220.48, "text": " And now, hold onto your papers because this framework can also be used to make amazing"}, {"start": 220.48, "end": 227.28, "text": " mind-bending game puzzles where we can miraculously unbind seemingly impossible configurations"}, {"start": 227.28, "end": 230.07999999999998, "text": " by reaching into the 4th dimension."}, {"start": 230.07999999999998, "end": 235.0, "text": " This is one of those crazy research works that truly cannot be bothered by recent trends"}, {"start": 235.0, "end": 238.76, "text": " and hot topics and I mean it in the best possible way."}, {"start": 238.76, "end": 244.0, "text": " It creates its own little world and invites us to experience it and it truly is a site"}, {"start": 244.0, "end": 245.0, "text": " to behold."}, {"start": 245.0, "end": 249.48, "text": " I can only imagine how one of these soft body or fluid simulations would look in higher"}, {"start": 249.48, "end": 255.16, "text": " dimensions and boy, would I love to see some follow-up papers perform something like this."}, {"start": 255.16, "end": 259.03999999999996, "text": " If you wish to see more, make sure to have a look at the paper in the video description"}, {"start": 259.03999999999996, "end": 264.68, "text": " and I also put a link to one 4D game that is under development and one that you can buy"}, {"start": 264.68, "end": 265.96, "text": " and play today."}, {"start": 265.96, "end": 270.65999999999997, "text": " Huge 
congratulations to Mark Tandbosch who got this single author paper accepted to"}, {"start": 270.65999999999997, "end": 275.12, "text": " see-graph perhaps the most prestigious computer graphics conference."}, {"start": 275.12, "end": 280.04, "text": " Not only that but I am pretty sure that I have never seen a work from an indie game developer"}, {"start": 280.04, "end": 283.44, "text": " presented as a C-graph technical paper."}, {"start": 283.44, "end": 284.6, "text": " Bravo!"}, {"start": 284.6, "end": 287.68, "text": " This episode has been supported by weights and biases."}, {"start": 287.68, "end": 292.72, "text": " In this post they show you how to use their system with tensile board and visualize your"}, {"start": 292.72, "end": 293.88, "text": " results with it."}, {"start": 293.88, "end": 299.2, "text": " You can even try an example in an interactive notebook through the link in the video description."}, {"start": 299.2, "end": 303.8, "text": " Wates and biases provides tools to track your experiments in your deep learning projects."}, {"start": 303.8, "end": 308.6, "text": " Their system is designed to save you a ton of time and money and it is actively used"}, {"start": 308.6, "end": 315.36, "text": " in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more."}, {"start": 315.36, "end": 320.28000000000003, "text": " And the best part is that if you have an open source, academic or personal project,"}, {"start": 320.28000000000003, "end": 322.40000000000003, "text": " you can use their tools for free."}, {"start": 322.40000000000003, "end": 324.92, "text": " It really is as good as it gets."}, {"start": 324.92, "end": 331.04, "text": " Make sure to visit them through wnb.com slash papers or click the link in the video description"}, {"start": 331.04, "end": 334.36, "text": " to start tracking your experiments in 5 minutes."}, {"start": 334.36, "end": 338.92, "text": " Our thanks to weights and biases for their long standing support and for helping us make"}, {"start": 338.92, "end": 340.28000000000003, "text": " better videos for you."}, {"start": 340.28, "end": 369.84, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=pBkFAIUmWu0
These AI-Driven Characters Dribble Like Mad! 🏀
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://app.wandb.ai/cayush/pytorchlightning/reports/How-to-use-Pytorch-Lightning-with-Weights-%26-Biases--Vmlldzo2NjQ1Mw 📝 The paper "Local Motion Phases for Learning Multi-Contact Character Movements" is available here: https://github.com/sebastianstarke/AI4Animation 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. In computer games and all kinds of applications where we are yearning for realistic animation, we somehow need to tell our computers how the different joints and body parts of these virtual characters are meant to move around over time. Since the human eye is very sensitive to even the tiniest inaccuracies, we typically don't program these motions by hand, but instead we often record a lot of real-time motion capture data in a studio and try to reuse that in our applications. Previous techniques have tackled quadruped control, and we can even teach bipeds to interact with their environment in a realistic manner. Today we will have a look at an absolutely magnificent piece of work where the authors carved out a smaller subproblem and made a solution for it that is truly second to none. And this subproblem is simulating virtual characters playing basketball. Like with previous works, we are looking for realism in the movement, and for games it is also a requirement that the character responds to our controls well. However, the key challenge is that all we are given is 3 hours of unstructured motion capture data. That is next to nothing, and from this next to nothing, a learning algorithm has to learn to understand these motions so well that it can weave them together even when a specific movement combination is not present in this data. That is quite a challenge. Compared to many other works, this data is really not a lot, so I am excited to see what value we are getting out of these 3 hours. At first I thought we would get only very rudimentary motions, and boy, was I dead wrong on that one. We have control over this character and can perform these elaborate maneuvers, and it remains very responsive even if we mash the controller like a madman, producing these sharp turns. As you see, it can handle these cases really well. And not only that, but it is so well done we can even dribble through a set of obstacles, leading to responsive and enjoyable gameplay. Now, about these dribbling behaviors, do we get only one boring motion or not? Not at all: it was able to mine out not just one, but many kinds of dribbling motions, and is able to weave them into other moves as soon as we interact with the controller. This is already very convincing, especially from just 3 hours of unstructured motion capture data. But this paper is just getting started. Now hold on to your papers, because we can also shoot and catch the ball and move it around. That is very surprising because it has looked at so little shooting data. Let's see. Yes, less than 7 minutes. My goodness, and it keeps going. What I have found even more surprising is that it can handle unexpected movements, which I find to be even more remarkable given the limited training data. These crazy corner cases are typically learnable when they are available in abundance in the training data, which is not the case here. Amazing. When we compare these motions to a previous method, we see that both the character's and the ball's movement is much more lively. For instance, here you can see that the phase-functioned neural network, PFNN in short, almost makes it seem like the ball has to stick to the hand of the player for an unhealthy amount of time to be able to create these motions. It doesn't happen at all with the new technique, and remember, this new method is also much more responsive to the player's controls and thus more enjoyable not only to look at but to play with. 
This is an aspect that is hard to measure, but it is not to be underestimated in the general playing experience. Just imagine what this research area will be capable of, not in a decade, but just two more papers down the line. Loving it. Now, at the start of the video I noted that the authors carved out a small use case, which is training an AI to weave together basketball motion capture data in a manner that is both realistic and controllable. However, many times in research we look at a specialized problem, and during that journey we learn general concepts that can be applied to other problems as well. That is exactly what happened here: as you see, parts of this technique can be generalized for quadruped control as well, because this good boy is pacing and running around beautifully. And you guessed right, our favorite biped from the previous paper is also making an appearance. I am absolutely spellbound by this work and I hope that now you are too. Can't wait to see this implemented in newer games and other real-time applications. What a time to be alive. This episode has been supported by Weights & Biases. In this post they show you how to use their system with PyTorch Lightning, decouple your science code from your engineering code, and visualize your models. You can even try an example in an interactive notebook through the link in the video description. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you have an open source, academic, or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or click the link in the video description to start tracking your experiments in five minutes. Thanks for watching and for your generous support and I'll see you next time.
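The comparison above mentions the phase-functioned neural network; the basic intuition behind phase-driven animation controllers can be shown with a tiny pose-blending sketch. This is only a generic illustration with made-up joint angles of how a cyclic phase variable can drive smooth transitions between poses; it is not the local motion phase architecture proposed in the paper.

```python
# A toy illustration of phase-driven blending between two key poses.
# This is NOT the paper's local motion phase network, just the generic idea
# that a cyclic phase variable can drive smooth transitions between motions.
import math

pose_plant = {"hip": 0.0, "knee": 10.0, "elbow": 45.0}   # made-up joint angles (degrees)
pose_reach = {"hip": 15.0, "knee": 60.0, "elbow": 90.0}

def blend(phase):
    """phase in [0, 1): 0 gives the plant pose, 0.5 the reach pose, and it cycles smoothly."""
    w = 0.5 - 0.5 * math.cos(2.0 * math.pi * phase)  # smooth blend weight in [0, 1]
    return {joint: (1.0 - w) * pose_plant[joint] + w * pose_reach[joint] for joint in pose_plant}

for step in range(5):
    phase = step / 5.0
    print(f"phase {phase:.1f}: {blend(phase)}")
```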
[{"start": 0.0, "end": 5.6000000000000005, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Dr. K\u00e1roly Zsolnai-Fehir in computer games"}, {"start": 5.6000000000000005, "end": 10.72, "text": " and all kinds of applications where we are yearning for realistic animation, we somehow"}, {"start": 10.72, "end": 16.080000000000002, "text": " need to tell our computers how the different joints and body parts of these virtual characters"}, {"start": 16.080000000000002, "end": 18.64, "text": " are meant to move around over time."}, {"start": 18.64, "end": 24.240000000000002, "text": " Since the human eye is very sensitive to even the tiniest inaccuracies, we typically don't"}, {"start": 24.240000000000002, "end": 29.76, "text": " program these motions by hand, but instead we often record a lot of real-time motion capture"}, {"start": 29.76, "end": 34.32, "text": " data in a studio and try to reuse that in our applications."}, {"start": 34.32, "end": 39.68, "text": " Previous techniques have tackled quadruped control and we can even teach bipeds to interact"}, {"start": 39.68, "end": 42.32, "text": " with their environment in a realistic manner."}, {"start": 42.32, "end": 47.040000000000006, "text": " Today we will have a look at an absolutely magnificent piece of work where the authors"}, {"start": 47.040000000000006, "end": 53.44, "text": " carved out a smaller subproblem and made a solution for it that is truly second to none."}, {"start": 53.44, "end": 58.400000000000006, "text": " And this subproblem is simulating virtual characters playing basketball."}, {"start": 58.4, "end": 62.96, "text": " Like with previous works, we are looking for realism in the movement and for games it is"}, {"start": 62.96, "end": 67.2, "text": " also a requirement that the character responds to our controls well."}, {"start": 67.2, "end": 73.52, "text": " However, the key challenge is that all we are given is 3 hours of unstructured motion capture"}, {"start": 73.52, "end": 74.72, "text": " data."}, {"start": 74.72, "end": 80.24, "text": " That is next to nothing and from this next to nothing, a learning algorithm has to learn"}, {"start": 80.24, "end": 85.72, "text": " to understand these motions so well that it can weave them together even when a specific"}, {"start": 85.72, "end": 89.16, "text": " movement combination is not present in this data."}, {"start": 89.16, "end": 91.0, "text": " That is quite a challenge."}, {"start": 91.0, "end": 96.36, "text": " Compared to many other works, this data is really not a lot so I am excited to see what"}, {"start": 96.36, "end": 99.72, "text": " value we are getting out of these 3 hours."}, {"start": 99.72, "end": 105.68, "text": " At first I thought we would get only very rudimentary motions and boy was I dead wrong on that"}, {"start": 105.68, "end": 106.68, "text": " one."}, {"start": 106.68, "end": 111.72, "text": " We have control over this character and can perform these elaborate maneuvers and it remains"}, {"start": 111.72, "end": 118.28, "text": " very responsive even if we mesh the controller like a madman producing these sharp turns."}, {"start": 118.28, "end": 121.68, "text": " As you see it can handle these cases really well."}, {"start": 121.68, "end": 127.03999999999999, "text": " And not only that but it is so well done we can even dribble through a set of obstacles"}, {"start": 127.03999999999999, "end": 130.8, "text": " leading to a responsive and enjoyable gameplay."}, {"start": 130.8, "end": 136.96, "text": " Now about these dribbling 
behaviors, do we get only one boring motion or not?"}, {"start": 136.96, "end": 142.28, "text": " Not at all it was able to mine out not just one but many kinds of dribbling motions and"}, {"start": 142.28, "end": 147.96, "text": " is able to weave them into other moves as soon as we interact with the controller."}, {"start": 147.96, "end": 154.12, "text": " This is already very convincing especially from just 3 hours of unstructured motion capture"}, {"start": 154.12, "end": 155.12, "text": " data."}, {"start": 155.12, "end": 158.12, "text": " But this paper is just getting started."}, {"start": 158.12, "end": 163.52, "text": " Now hold on to your papers because we can also shoot and catch the ball, move it around"}, {"start": 163.52, "end": 168.48000000000002, "text": " that is very surprising because it has looked at so little shooting data."}, {"start": 168.48000000000002, "end": 172.88, "text": " Let's see yes less than 7 minutes."}, {"start": 172.88, "end": 178.64000000000001, "text": " My goodness and it keeps going what I have found even more surprising is that it can handle"}, {"start": 178.64000000000001, "end": 183.72, "text": " unexpected movements which I find to be even more remarkable given the limited training"}, {"start": 183.72, "end": 184.72, "text": " data."}, {"start": 184.72, "end": 189.76000000000002, "text": " These crazy corner cases are typically learnable when they are available in abundance in the"}, {"start": 189.76000000000002, "end": 192.92000000000002, "text": " training data which is not the case here."}, {"start": 192.92, "end": 194.11999999999998, "text": " Amazing."}, {"start": 194.11999999999998, "end": 199.0, "text": " When we compare these motions to a previous method we see that both the character and"}, {"start": 199.0, "end": 201.79999999999998, "text": " the ball's movement is much more lively."}, {"start": 201.79999999999998, "end": 207.27999999999997, "text": " For instance here you can see that the face function neural network pfnn in short almost"}, {"start": 207.27999999999997, "end": 211.92, "text": " makes it seem like the ball has to stick to the hand of the player for an unhealthy amount"}, {"start": 211.92, "end": 215.88, "text": " of time to be able to create these motions."}, {"start": 215.88, "end": 221.56, "text": " It doesn't happen at all with the new technique and remember this new method is also much more"}, {"start": 221.56, "end": 227.16, "text": " responsive to the player's controls and thus more enjoyable not only to look at but to"}, {"start": 227.16, "end": 228.24, "text": " play with."}, {"start": 228.24, "end": 232.84, "text": " This is an aspect that is hard to measure but it is not to be underestimated in the"}, {"start": 232.84, "end": 235.08, "text": " general playing experience."}, {"start": 235.08, "end": 240.8, "text": " Just imagine what this research area will be capable of not in a decade but just two more"}, {"start": 240.8, "end": 242.4, "text": " papers down the line."}, {"start": 242.4, "end": 243.4, "text": " Loving it."}, {"start": 243.4, "end": 249.2, "text": " Now at the start of the video I noted that the authors carved out a small use case which"}, {"start": 249.2, "end": 255.88, "text": " is training an AI to weave together basketball motion capture data in a manner that is both"}, {"start": 255.88, "end": 258.44, "text": " realistic and controllable."}, {"start": 258.44, "end": 264.2, "text": " However, many times in research we look at a specialized problem and during that journey"}, {"start": 
264.2, "end": 269.24, "text": " we learn general concepts that can be applied to other problems as well."}, {"start": 269.24, "end": 274.36, "text": " That is exactly what happened here as you see parts of this technique can be generalized"}, {"start": 274.36, "end": 276.59999999999997, "text": " for quadruped control as well."}, {"start": 276.6, "end": 282.12, "text": " Because good boy is pacing and running around beautifully."}, {"start": 282.12, "end": 287.72, "text": " And you guessed right our favorite biped from the previous paper is also making an introduction."}, {"start": 287.72, "end": 292.96000000000004, "text": " I am absolutely spellbound by this work and I hope that now you are too."}, {"start": 292.96000000000004, "end": 297.92, "text": " Can't wait to see this implemented in newer games and other real time applications."}, {"start": 297.92, "end": 299.64000000000004, "text": " What a time to be alive."}, {"start": 299.64000000000004, "end": 302.8, "text": " This episode has been supported by weights and biases."}, {"start": 302.8, "end": 307.76, "text": " In this post they show you how to use their system with PyTorch lightning and decouple"}, {"start": 307.76, "end": 312.56, "text": " your science code from your engineering code and visualize your models."}, {"start": 312.56, "end": 317.92, "text": " You can even try an example in an interactive notebook through the link in the video description."}, {"start": 317.92, "end": 322.6, "text": " Weight and biases provides tools to track your experiments in your deep learning projects."}, {"start": 322.6, "end": 327.40000000000003, "text": " Their system is designed to save you a ton of time and money and it is actively used"}, {"start": 327.4, "end": 334.12, "text": " in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more."}, {"start": 334.12, "end": 339.08, "text": " And the best part is that if you have an open source academic or personal project you"}, {"start": 339.08, "end": 341.2, "text": " can use their tools for free."}, {"start": 341.2, "end": 343.71999999999997, "text": " It really is as good as it gets."}, {"start": 343.71999999999997, "end": 349.84, "text": " Make sure to visit them through wnb.com slash papers or click the link in the video description"}, {"start": 349.84, "end": 353.15999999999997, "text": " to start tracking your experiments in five minutes."}, {"start": 353.16, "end": 363.0, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=ICr6xi9wA94
An AI Learned To See Through Obstructions! 👀
❤️ Check out Snap's Residency Program and apply here: https://lensstudio.snapchat.com/snap-ar-creator-residency-program/?utm_source=twominutepapers&utm_medium=video&utm_campaign=tmp_ml_residency ❤️ Try Snap's Lens Studio here: https://lensstudio.snapchat.com/ 📝 The paper "Learning to See Through Obstructions" is available here: https://alex04072000.github.io/ObstructionRemoval/ https://github.com/alex04072000/ObstructionRemoval 📝 Try it out here: https://colab.research.google.com/drive/1iOKknc0dePekUH2TEh28EhcRPCS1mgwz  🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. More info if you would like to appear here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-451982/ Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Approximately two years ago, we covered a work where a learning-based algorithm was able to read the Wi-Fi signals in a room to not only locate a person in a building, but even estimate their pose. An additional property of this method was that, as you see here, it does not look at images, but radio signals, which also travel in the dark, and therefore this pose estimation also works well in poor lighting conditions. Today's paper offers a learning-based method for a different, more practical data completion problem, and it mainly works on image sequences. You see, we can give this one a short image sequence with obstructions, for instance, the fence here. And it is able to find and remove this obstruction, and not only that, but it can also show us what exactly is behind the fence. How is that even possible? Well, note that we mentioned that the input is not an image, but an image sequence, a short video, if you will. This contains the scene from different viewpoints and is one of the typical cases where, if we gave all this data to a human, they would take a long, long time, but would be able to reconstruct what exactly is behind the fence, because this data was visible from other viewpoints. But of course, clearly, this approach would be prohibitively slow and expensive. The cool thing here is that this learning-based method is capable of doing this automatically. But it does not stop there. I was really surprised to find out that it even works for video outputs as well, so if you did not have a clear sight of the tiger in the zoo, do not despair. Just use this method and there you go. When looking at the results of techniques like this, I always try to look only at the output and guess where the fence was obstructing it. With many simpler image-inpainting techniques, this is easy to tell if you look for it, but here I can't see a trace. Can you? Let me know in the comments. Admittedly, the resolution of this video is not very high, but the results look very reassuring. This can also perform reflection removal, and some of the input images are highly contaminated by these reflected objects. Let's have a look at some results. You can see here how the technique decomposes the input into two images, one with the reflection and one without. The results are clearly not perfect, but they are easily good enough to make my brain focus on the real background without being distracted by the reflections. This was not the case with the input at all. Bravo! This use case can also be extended to videos, and I wonder how much temporal coherence I can expect in the output. In other words, if the technique solves the adjacent frames too differently, flickering is introduced in the video, and this effect is the bane of many techniques that are otherwise really good on still images. Let's have a look. There is a tiny bit of flickering, but the results are surprisingly consistent. It also does quite well when compared to previous methods, especially when we are able to provide multiple images as an input. Now note that the table says ours, without online optimization. What could that mean? This online optimization step is a computationally expensive way to further improve separation in the outputs, and with that, the authors propose a quicker and a slower version of the technique. This one, without the online optimization step, runs in just a few seconds, and if we add this step, we will have to wait approximately 15 minutes. 
I had to read the table several times, because researchers typically bring the best version of their technique to the comparisons, and that is not the case here. Even the quicker version smokes the competition. Loving it. Note that if you have a look at the paper in the video description, there are, of course, more detailed comparisons against other methods as well. If these AR glasses that we hear so much about come to fruition in the next few years, having an algorithm for real-time glare, reflection and obstruction removal would be beyond amazing. We really live in a science fiction world. What a time to be alive. This episode has been supported by Snap Inc. What you see here is Snap ML, a framework that helps you bring your own machine learning models to Snapchat's AR Lenses. You can build augmented reality experiences for Snapchat's hundreds of millions of users and help them see the world through a different lens. You can also apply to Snap's AR Creator Residency program with a proposal of how you would use Lens Studio for a creative project. If selected, you could receive a grant between $1,000 and $5,000 and work with Snap's technical and creative teams to bring your ideas to life. It doesn't get any better than that. Make sure to go to the link in the video description to apply for their Residency program and try Snap ML today. Our thanks to Snap Inc. for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
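The transcript above mentions temporal coherence: if the method solves adjacent frames too differently, the output video flickers. Purely as an illustration of that idea (not the paper's method or its evaluation protocol; the random frames and the missing motion compensation are assumptions made for brevity), one could measure how much consecutive output frames differ:

```python
# Minimal sketch: average absolute difference between adjacent frames as a crude
# proxy for flicker. A real evaluation would first align frames for camera
# motion (e.g. with optical flow); we skip that step here, which is an assumption.
import numpy as np

def mean_frame_difference(frames):
    """frames: sequence of HxWx3 float arrays with values in [0, 1]."""
    diffs = [np.abs(a - b).mean() for a, b in zip(frames[:-1], frames[1:])]
    return float(np.mean(diffs))

# Random frames stand in for a decomposed background video.
video = [np.random.rand(64, 64, 3) for _ in range(10)]
print("average adjacent-frame difference:", mean_frame_difference(video))
```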
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Kato Jona-Ifahir."}, {"start": 4.64, "end": 9.92, "text": " Approximately two years ago, we covered the work where a learning-based algorithm was able"}, {"start": 9.92, "end": 16.16, "text": " to read the Wi-Fi signals in a room to not only locate a person in a building, but even"}, {"start": 16.16, "end": 19.240000000000002, "text": " estimate their pose."}, {"start": 19.240000000000002, "end": 24.72, "text": " An additional property of this method was that, as you see here, it does not look at images,"}, {"start": 24.72, "end": 30.18, "text": " but radio signals which also traverse in the dark and therefore, this pose estimation"}, {"start": 30.18, "end": 33.84, "text": " also works well in poor lighting conditions."}, {"start": 33.84, "end": 38.84, "text": " Today's paper offers a learning-based method for a different, more practical data completion"}, {"start": 38.84, "end": 42.58, "text": " problem and it mainly works on image sequences."}, {"start": 42.58, "end": 47.68, "text": " You see, we can give this one a short image sequence with obstructions, for instance,"}, {"start": 47.68, "end": 49.019999999999996, "text": " the fence here."}, {"start": 49.019999999999996, "end": 54.519999999999996, "text": " And it is able to find and remove this obstruction, and not only that, but it can also show us"}, {"start": 54.52, "end": 57.32, "text": " what is exactly behind the fence."}, {"start": 57.32, "end": 59.24, "text": " How is that even possible?"}, {"start": 59.24, "end": 64.72, "text": " Well, note that we mentioned that the input is not an image, but an image sequence, a"}, {"start": 64.72, "end": 66.68, "text": " short video, if you will."}, {"start": 66.68, "end": 71.4, "text": " This contains the scene from different viewpoints and is one of the typical cases where if we"}, {"start": 71.4, "end": 77.32000000000001, "text": " would give all this data to a human, this human would take a long, long time, but would"}, {"start": 77.32000000000001, "end": 82.64, "text": " be able to reconstruct what is exactly behind the fence because this data was visible"}, {"start": 82.64, "end": 84.48, "text": " from other viewpoints."}, {"start": 84.48, "end": 89.72, "text": " But of course, clearly, this approach would be prohibitively slow and expensive."}, {"start": 89.72, "end": 96.96000000000001, "text": " The cool thing here is that this learning-based method is capable of doing this automatically."}, {"start": 96.96000000000001, "end": 98.68, "text": " But it does not stop there."}, {"start": 98.68, "end": 104.16, "text": " I was really surprised to find out that it even works for video outputs as well, so if"}, {"start": 104.16, "end": 108.56, "text": " you did not have a clear sight of the tiger in the zoo, do not despair."}, {"start": 108.56, "end": 111.04, "text": " Just use this method and there you go."}, {"start": 111.04, "end": 115.16000000000001, "text": " When looking at the results of techniques like this, I always try to look only at the"}, {"start": 115.16000000000001, "end": 119.52000000000001, "text": " output and try to guess where the fence was, obstructing it."}, {"start": 119.52000000000001, "end": 124.52000000000001, "text": " With many simpler image-impainting techniques, this is easy to tell if you look for it, but"}, {"start": 124.52000000000001, "end": 127.36000000000001, "text": " here I can't see a trace."}, {"start": 127.36000000000001, "end": 128.36, "text": " Can 
you?"}, {"start": 128.36, "end": 129.8, "text": " Let me know in the comments."}, {"start": 129.8, "end": 134.56, "text": " Admittedly, the resolution of this video is not very high, but the results look very"}, {"start": 134.56, "end": 135.96, "text": " reassuring."}, {"start": 135.96, "end": 141.68, "text": " This can also perform reflection removal and some of the input images are highly contaminated"}, {"start": 141.68, "end": 143.8, "text": " by these reflected objects."}, {"start": 143.8, "end": 145.56, "text": " Let's have a look at some results."}, {"start": 145.56, "end": 151.72, "text": " You can see here how the technique decomposes the input into two images, one with the reflection"}, {"start": 151.72, "end": 153.12, "text": " and one without."}, {"start": 153.12, "end": 157.88, "text": " The results are clearly not perfect, but they are easily good enough to make my brain"}, {"start": 157.88, "end": 163.04000000000002, "text": " focus on the real background without being distracted by the reflections."}, {"start": 163.04000000000002, "end": 165.72, "text": " This was not the case with the input at all."}, {"start": 165.72, "end": 166.72, "text": " Bravo!"}, {"start": 166.72, "end": 172.12, "text": " This use case can also be extended for videos and I wonder how much temporal coherence"}, {"start": 172.12, "end": 174.2, "text": " I can expect in the output."}, {"start": 174.2, "end": 179.36, "text": " In other words, if the technique solves the adjacent frames too differently, flickering"}, {"start": 179.36, "end": 184.6, "text": " is introduced in the video and this effect is the bane of many techniques that are otherwise"}, {"start": 184.6, "end": 186.68, "text": " really good on still images."}, {"start": 186.68, "end": 188.44, "text": " Let's have a look."}, {"start": 188.44, "end": 195.12, "text": " There is a tiny bit of flickering, but the results are surprisingly consistent."}, {"start": 195.12, "end": 200.4, "text": " It also does quite well when compared to previous methods, especially when we are able to provide"}, {"start": 200.4, "end": 203.56, "text": " multiple images as an input."}, {"start": 203.56, "end": 208.16, "text": " Now note that it says hours without online optimization."}, {"start": 208.16, "end": 209.76, "text": " What could that mean?"}, {"start": 209.76, "end": 215.16, "text": " This online optimization step is a computationally expensive way to further improve separation"}, {"start": 215.16, "end": 220.96, "text": " in the outputs and with that the authors propose a quicker and a slower version of the"}, {"start": 220.96, "end": 222.04000000000002, "text": " technique."}, {"start": 222.04, "end": 227.84, "text": " This one, without the online optimization step, runs in just a few seconds and if we add"}, {"start": 227.84, "end": 231.79999999999998, "text": " this step we will have to wait approximately 15 minutes."}, {"start": 231.79999999999998, "end": 236.76, "text": " I had to read the table several times because researchers typically bring the best version"}, {"start": 236.76, "end": 241.28, "text": " of their technique to the comparisons and it is not the case here."}, {"start": 241.28, "end": 244.48, "text": " Even the quicker version smokes the competition."}, {"start": 244.48, "end": 245.48, "text": " Loving it."}, {"start": 245.48, "end": 250.23999999999998, "text": " Note that if you have a look at the paper in the video description, there are, of course,"}, {"start": 250.24, "end": 253.36, "text": " more detailed comparisons against 
other methods as well."}, {"start": 253.36, "end": 258.6, "text": " If these AR glasses that we hear so much about come to fruition in the next few years,"}, {"start": 258.6, "end": 264.24, "text": " having an algorithm for real time, glare, reflection and obstruction removal would be beyond"}, {"start": 264.24, "end": 268.2, "text": " amazing, which really live in a science fiction world."}, {"start": 268.2, "end": 270.04, "text": " What a time to be alive."}, {"start": 270.04, "end": 272.88, "text": " This episode has been supported by Snap Inc."}, {"start": 272.88, "end": 277.84000000000003, "text": " What you see here is Snap ML, a framework that helps you bring your own machine learning"}, {"start": 277.84, "end": 280.59999999999997, "text": " models to Snapchats AR lenses."}, {"start": 280.59999999999997, "end": 285.71999999999997, "text": " You can build augmented reality experiences for Snapchats hundreds of millions of users"}, {"start": 285.71999999999997, "end": 288.52, "text": " and help them see the world through a different lens."}, {"start": 288.52, "end": 293.91999999999996, "text": " You can also apply to Snap's AR Creator Residency program with a proposal of how you would"}, {"start": 293.91999999999996, "end": 296.91999999999996, "text": " use Lens Studio for a creative project."}, {"start": 296.91999999999996, "end": 303.79999999999995, "text": " If selected, you could receive a grant between 1 to $5,000 and work with Snap's technical"}, {"start": 303.79999999999995, "end": 306.96, "text": " and creative teams to bring your ideas to life."}, {"start": 306.96, "end": 309.28, "text": " It doesn't get any better than that."}, {"start": 309.28, "end": 313.88, "text": " Make sure to go to the link in the video description and apply for their Residency program and"}, {"start": 313.88, "end": 316.0, "text": " try Snap ML today."}, {"start": 316.0, "end": 319.76, "text": " Our thanks to Snap Inc for helping us make better videos for you."}, {"start": 319.76, "end": 348.28, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=TrdmCkmK3y4
This AI Creates Dogs From Cats…And More!
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers Their instrumentation of this paper is available here: https://app.wandb.ai/stacey/stargan/reports/Cute-Animals-and-Post-Modern-Style-Transfer%3A-Stargan-V2-for-Multi-Domain-Image-Synthesis---VmlldzoxNzcwODQ 📝 The paper "StarGAN v2: Diverse Image Synthesis for Multiple Domains" is available here: - Paper: https://arxiv.org/abs/1912.01865 - Code: https://github.com/clovaai/stargan-v2 - Youtube Video: https://youtu.be/0EVh5Ki4dIY The paper with the latent space material synthesis is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/  🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. More info if you would like to appear here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we have a selection of learning-based techniques that can generate images of photorealistic human faces for people that don't exist. These techniques have come a long way over the last few years, so much so that we can now even edit these images to our liking by, for instance, putting a smile on their faces, making them older or younger, adding or removing a beard, and more. However, most of these techniques are still lacking in two things. One is diversity of outputs, and two, generalization to multiple domains. Typically, the ones that work on multiple domains don't perform too well on most of them. This new technique is called StarGAN v2 and addresses both of these issues. Let's start with the humans. In the footage here, you see a lot of interpolation between test subjects, which means that we start out from a source person and generate images that morph them into the target subjects, not in just any way, but in a way that all of the intermediate images are believable. In these results, many attributes from the input subject, such as pose, nose type, mouth shape, and position, are also reflected in the output. I like how the motion of the images on the left reflects the state of the interpolation. As this slowly takes place, we can witness how the reference person grows a beard. But we are not nearly done yet. We noted that another great advantage of this technique is that it works for multiple domains, and this means, of course, none other than us looking at cats morphing into dogs and other animals. In these cases, I see that the algorithm picks up the gaze direction, so this generalizes even to animals. That's great. What is even better is that the face shape of the tiger appears to have been translated to the photo of this cat, and if we have a bigger cat as an input, the output will also give us this lovely and a little plump creature. And look, here the cat in the input is occluded in this target image, but that is not translated to the output image. The AI knows that this is not part of the cat, but an occlusion. Imagine what it would take to prepare a handcrafted algorithm to distinguish these features. My goodness. And now, onto dogs. What is really cool is that in this case, bent ears have their own meaning, and we get several versions of the same dog breed with or without them. And it can handle a variety of other animals too. I could look at these all day. And now, to understand why this works so well, we first have to understand what a latent space is. Here you see an example of a latent space that was created to be able to browse through fonts and even generate new ones. This method essentially looks at a bunch of already existing fonts and tries to boil them down into the essence of what makes them different. It is a simpler, often incomplete, but more manageable representation for a given domain. This domain can be almost anything; for instance, here you see another technique that does something similar with material models. Now the key difference in this new work compared to previous techniques is that it creates not one latent space, but several of these latent spaces for different domains. As a result, it can not only generate images in all of these domains, but can also translate different features, for instance ears, eyes, and noses, from a cat to a dog or a cheetah in a way that makes sense. And the results look like absolute witchcraft. 
Now since the look on this cheetah's face indicates that it has had enough of this video, just one more example before we go. As a possible failure case, have a look at the ears of this cat. It seems to be in a peculiar, midway land between a pointy and a bent ear, but it doesn't quite look like either of them. What do you think? Maybe some of you cat people can weigh in on this. Let me know in the comments. What you see here is an instrumentation of this exact paper we have talked about, which was made by Weights & Biases. I think organizing these experiments really showcases the usability of their system. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you have an open source, academic, or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or click the link in the video description to start tracking your experiments in five minutes. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
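The transcript above talks about latent spaces and about interpolating between subjects so that every intermediate image stays believable. Purely as an illustration of that interpolation idea (the 64-dimensional codes, the random stand-ins, and the generator `G` mentioned in the comment are assumptions, not StarGAN v2's actual components), a linear walk between two latent codes might look like this:

```python
# Minimal sketch: linearly interpolate between two latent/style codes. In a
# trained generative model, each intermediate code would be decoded into an
# in-between image; here the codes are just random stand-ins.
import numpy as np

def interpolate_codes(z_source, z_target, steps=8):
    alphas = np.linspace(0.0, 1.0, steps)
    return [(1.0 - a) * z_source + a * z_target for a in alphas]

rng = np.random.default_rng(0)
z_a = rng.standard_normal(64)  # stand-in for the source subject's code
z_b = rng.standard_normal(64)  # stand-in for the target subject's code

for z in interpolate_codes(z_a, z_b):
    # A real pipeline would call something like: image = G(content_image, z)
    print(np.round(z[:3], 2))  # show the first few coordinates drifting from source to target
```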
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zonai-Fehir."}, {"start": 4.8, "end": 10.48, "text": " Today, we have a selection of learning-based techniques that can generate images of photorealistic"}, {"start": 10.48, "end": 13.76, "text": " human faces for people that don't exist."}, {"start": 13.76, "end": 19.3, "text": " These techniques have come a long way over the last few years, so much so that we can now"}, {"start": 19.3, "end": 25.44, "text": " even edit these images to our liking by, for instance, putting a smile on their faces,"}, {"start": 25.44, "end": 30.520000000000003, "text": " taking them older or younger, adding or removing a beard, and more."}, {"start": 30.520000000000003, "end": 35.44, "text": " However, most of these techniques are still lacking in two things."}, {"start": 35.44, "end": 40.96, "text": " One is diversity of outputs and two generalization to multiple domains."}, {"start": 40.96, "end": 46.760000000000005, "text": " Typically, the ones that work on multiple domains don't perform too well on most of them."}, {"start": 46.760000000000005, "end": 51.84, "text": " This new technique is called StarGand2 and addresses both of these issues."}, {"start": 51.84, "end": 53.68000000000001, "text": " Let's start with the humans."}, {"start": 53.68, "end": 59.64, "text": " In the footage here, you see a lot of interpolation between test subjects, which means that we start"}, {"start": 59.64, "end": 65.8, "text": " out from a source person and generate images that morph them into the target subjects,"}, {"start": 65.8, "end": 71.96000000000001, "text": " not in any way, but in a way that all of the intermediate images are believable."}, {"start": 71.96000000000001, "end": 77.68, "text": " In these results, many attributes from the input subject, such as pose, nose type, mouth"}, {"start": 77.68, "end": 82.88, "text": " shape, and position, are also reflected on the output."}, {"start": 82.88, "end": 88.72, "text": " I like how the motion of the images on the left reflects the state of the interpolation."}, {"start": 88.72, "end": 93.75999999999999, "text": " As this slowly takes place, we can witness how the reference person grows out of a beard."}, {"start": 93.75999999999999, "end": 96.39999999999999, "text": " But, we are not nearly done yet."}, {"start": 96.39999999999999, "end": 101.47999999999999, "text": " We noted that another great advantage of this technique is that it works for multiple domains,"}, {"start": 101.47999999999999, "end": 107.56, "text": " and this means, of course, none other than us looking at cats morphing into dogs and"}, {"start": 107.56, "end": 108.91999999999999, "text": " other animals."}, {"start": 108.92, "end": 114.32000000000001, "text": " In these cases, I see that the algorithm picks up the gaze direction, so this generalizes"}, {"start": 114.32000000000001, "end": 116.4, "text": " to even animals."}, {"start": 116.4, "end": 117.48, "text": " That's great."}, {"start": 117.48, "end": 123.0, "text": " What is even more great is that the face shape of the tiger appears to have been translated"}, {"start": 123.0, "end": 128.68, "text": " to the photo of this cat, and if we have a bigger cat as an input, the output will also"}, {"start": 128.68, "end": 133.28, "text": " give us this lovely and a little plump creature."}, {"start": 133.28, "end": 138.88, "text": " And look, here the cat in the input is occluded in this target image, but,"}, {"start": 138.88, "end": 141.68, 
"text": " that is not translated to the output image."}, {"start": 141.68, "end": 146.44, "text": " The AI knows that this is not part of the cat, but an occlusion."}, {"start": 146.44, "end": 152.44, "text": " Imagine what it would take to prepare a handcrafted algorithm to distinguish these features."}, {"start": 152.44, "end": 153.79999999999998, "text": " My goodness."}, {"start": 153.79999999999998, "end": 155.92, "text": " And now, onto dogs."}, {"start": 155.92, "end": 160.56, "text": " What is really cool is that in this case, bend the ears, have their own meaning, and"}, {"start": 160.56, "end": 166.84, "text": " we get several versions of the same dog breed with or without them."}, {"start": 166.84, "end": 170.36, "text": " And it can handle a variety of other animals too."}, {"start": 170.36, "end": 172.8, "text": " I could look at these all day."}, {"start": 172.8, "end": 178.32, "text": " And now, to understand why this works so well, we first have to understand what a latent"}, {"start": 178.32, "end": 179.64000000000001, "text": " space is."}, {"start": 179.64000000000001, "end": 184.36, "text": " Here you see an example of a latent space that was created to be able to browse through"}, {"start": 184.36, "end": 187.64000000000001, "text": " fonts and even generate new ones."}, {"start": 187.64000000000001, "end": 192.88, "text": " This method essentially tries to look at a bunch of already existing fonts and tries to"}, {"start": 192.88, "end": 196.84, "text": " boil them down into the essence of what makes them different."}, {"start": 196.84, "end": 203.4, "text": " It is a simpler, often incomplete, but more manageable representation for a given domain."}, {"start": 203.4, "end": 208.4, "text": " This domain can be almost anything, for instance, you see another technique that does something"}, {"start": 208.4, "end": 211.12, "text": " similar with material models."}, {"start": 211.12, "end": 216.07999999999998, "text": " Now the key difference in this new work compared to previous techniques is that it creates"}, {"start": 216.07999999999998, "end": 222.28, "text": " not one latent space, but several of these latent spaces for different domains."}, {"start": 222.28, "end": 227.6, "text": " As a result, it can not only generate images in all of these domains, but can also translate"}, {"start": 227.6, "end": 234.56, "text": " different features, for instance ears, eyes, noses from a cat to a dog or a cheetah in a"}, {"start": 234.56, "end": 236.68, "text": " way that makes sense."}, {"start": 236.68, "end": 240.4, "text": " And the results look like absolute witchcraft."}, {"start": 240.4, "end": 245.56, "text": " Now since the look on this cheetah's face indicates that it has had enough of this video,"}, {"start": 245.56, "end": 248.28, "text": " just one more example before we go."}, {"start": 248.28, "end": 252.6, "text": " As a possible failure case, have a look at the ears of this cat."}, {"start": 252.6, "end": 259.36, "text": " It seems to be in a peculiar, midway land between a pointy and a bent ear, but it doesn't"}, {"start": 259.36, "end": 261.24, "text": " quite look like any of them."}, {"start": 261.24, "end": 262.24, "text": " What do you think?"}, {"start": 262.24, "end": 265.24, "text": " Maybe some of you cat people can weigh in on this."}, {"start": 265.24, "end": 267.28, "text": " Let me know in the comments."}, {"start": 267.28, "end": 272.08, "text": " What you see here is an instrumentation of this exact paper we have talked about, which"}, {"start": 
272.08, "end": 274.48, "text": " was made by weights and biases."}, {"start": 274.48, "end": 280.04, "text": " I think organizing these experiments really showcases the usability of their system."}, {"start": 280.04, "end": 284.68, "text": " Weight and biases provides tools to track your experiments in your deep learning projects."}, {"start": 284.68, "end": 289.48, "text": " Their system is designed to save you a ton of time and money, and it is actively used"}, {"start": 289.48, "end": 296.20000000000005, "text": " in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more."}, {"start": 296.20000000000005, "end": 301.16, "text": " And the best part is that if you have an open source, academic, or personal project,"}, {"start": 301.16, "end": 303.28000000000003, "text": " you can use their tools for free."}, {"start": 303.28, "end": 305.79999999999995, "text": " It really is as good as it gets."}, {"start": 305.79999999999995, "end": 311.91999999999996, "text": " Make sure to visit them through wnbe.com slash papers, or click the link in the video description"}, {"start": 311.91999999999996, "end": 315.23999999999995, "text": " to start tracking your experiments in five minutes."}, {"start": 315.23999999999995, "end": 319.84, "text": " Our thanks to weights and biases for their long-standing support and for helping us make"}, {"start": 319.84, "end": 321.03999999999996, "text": " better videos for you."}, {"start": 321.04, "end": 333.6, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=MrIbQ0pIFOg
This AI Creates Beautiful 3D Photographs!
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers Their instrumentation of this paper is available here: https://app.wandb.ai/authors/3D-Inpainting/reports/3D-Image-Inpainting--VmlldzoxNzIwNTY 📝 The paper "3D Photography using Context-aware Layered Depth Inpainting" is available here: https://shihmengli.github.io/3D-Photo-Inpainting/ Try it out! Weights & Biases notebook: https://colab.research.google.com/drive/1yNkew-QUtVQPG8PbwWWMLKmnVlLOIfTs?usp=sharing Or try it out here - Author notebook: https://colab.research.google.com/drive/1706ToQrkIZshRSJSHvZ1RuCiM__YX3Bz#scrollTo=wPvkMT0msIJB  🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. More info if you would like to appear here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. We hear more and more about RGBD images these days. These are photographs that are endowed with depth information, which enables us to do many wondrous things. For instance, this method was used to endow self-driving cars with depth information and worked reasonably well. And this other one provides depth maps that are so consistent we can even add some AR effects to them, and today's paper is going to show us what 3D photography is. However, first we need not only color but also depth information in our images to perform these. You see, phones with depth scanners already exist and even more are coming as soon as this year. But even if you have a device that only gives you 2D color images, don't despair, there is plenty of research on how we can estimate these depth maps even if we have very limited information. And with proper depth information we can now create these 3D photographs, where we get even more information out of one still image. We can look behind objects and see things that we wouldn't see otherwise. Beautiful parallax effects appear as objects at different distances move different amounts as we move the camera around. You see that the foreground changes a great deal, the buildings in the background less so, and the hills behind them even less so. These photos truly come alive with this new method. An earlier algorithm, the legendary PatchMatch method from more than a decade ago, could perform something that we call image inpainting. Image inpainting means looking at what we see in these images and trying to fill in missing information with data that makes sense. The key difference here is that this new technique uses a learning method and does this image inpainting in 3D, and it fills in not only color but depth information as well. What a crazy, amazing idea. However, this is not the first method to perform this, so how does it compare to other research works? Let's have a look together. Previous methods have a great deal of warping and distortions on the bathtub here. And if you look at the new method, you see that it is much cleaner. There is still a tiny bit of warping, but it is significantly better. The dog head here with this previous method seems to be bobbing around a great deal. While the other methods also have some problems with it, look at these two. And if you look at how the new method handles it, it is significantly more stable, and you see that these previous techniques are from just one or two years ago. It is unbelievable how far we have come since. Bravo. So this was a qualitative comparison, or in other words, we looked at the results. What about the quantitative differences? What do the numbers say? Look at the PSNR column here; this means the peak signal-to-noise ratio. This is subject to maximization, as the up arrow denotes here. The higher, the better. The difference is between one half and two and a half points when compared to previous methods, which does not sound like a lot at all. So what happened here? Note that PSNR is not a linear but a logarithmic scale. So this means that a small numeric difference typically translates to a great deal of difference in the images, even if the numeric difference is just 0.5 points on the PSNR scale. However, if you look at SSIM, the structural similarity metric, all of them are quite similar, and the previous technique appears to even be winning here. 
But this was the method where the dog head was bobbing around, and in the individual comparisons, the new method came out significantly better than it. So what is going on here? Well, have a look at this metric, LPIPS, which was developed at UC Berkeley, OpenAI, and Adobe Research. At the risk of simplifying the situation, this uses a neural network to look at an image and uses its inner representation to decide how close two images are to each other. And loosely speaking, it kind of thinks about the differences as we humans do and is an excellent tool to compare images. And sure enough, this also concludes that the new method performs best. However, this method is still not perfect. There is some flickering going on behind these fences. The transparency of the glass here isn't perfect, but witnessing this huge leap in the quality of results in such little time is truly a sight to behold. What a time to be alive! I started this series to make people feel how I feel when I read these papers, and I really hope that it comes through with this paper. Absolutely amazing! What is even more amazing is that with a tiny bit of technical knowledge, you can run the source code in your browser, so make sure to have a look at the link in the video description. Let me know in the comments how it went. What you see here is an instrumentation of this exact paper we have talked about, which was made by Weights & Biases. I think organizing these experiments really showcases the usability of their system. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you have an open source, academic or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or click the link in the video description to start tracking your experiments in five minutes. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support. And I'll see you next time.
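Since the transcript leans on PSNR being a logarithmic scale, here is a tiny, self-contained sketch of the standard textbook PSNR formula (this is not the paper's evaluation code; the random images are stand-ins) that makes the remark concrete: halving the mean squared error adds only about 3 dB, so a gap of one half to two and a half points can still hide a clearly visible quality difference.

```python
# Minimal PSNR sketch for images with values in [0, 1]:
# PSNR = 10 * log10(max_value^2 / MSE), so it grows logarithmically as the
# error shrinks -- halving the MSE adds only about 3 dB.
import numpy as np

def psnr(reference, estimate, max_value=1.0):
    mse = np.mean((reference - estimate) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_value ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.random((64, 64, 3))
noisy = np.clip(ref + rng.normal(0.0, 0.05, ref.shape), 0.0, 1.0)
less_noisy = np.clip(ref + rng.normal(0.0, 0.025, ref.shape), 0.0, 1.0)
print(psnr(ref, noisy), psnr(ref, less_noisy))  # roughly a 6 dB gap for half the noise
```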
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.32, "end": 8.28, "text": " We hear more and more about RGBD images these days."}, {"start": 8.28, "end": 15.0, "text": " These are photographs that are endowed with depth information which enable us to do many wondrous things."}, {"start": 15.0, "end": 22.72, "text": " For instance, this method was used to end out self-driving cars with depth information and word reasonably well."}, {"start": 22.72, "end": 34.04, "text": " And this other one provides depth maps that are so consistent we can even add some AR effects to it and today's paper is going to show what 3D photography is."}, {"start": 34.04, "end": 40.84, "text": " However, first we need not only color but depth information in our images to perform these."}, {"start": 40.84, "end": 47.519999999999996, "text": " You see, phones with depth scanners already exist and even more are coming as soon as this year."}, {"start": 47.52, "end": 60.2, "text": " But even if you have a device that only gives you 2D color images, don't despair, there is plenty of research on how we can estimate these depth maps even if we have very limited information."}, {"start": 60.2, "end": 69.44, "text": " And with proper depth information we can now create these 3D photographs where we get even more information out of one still image."}, {"start": 69.44, "end": 74.24000000000001, "text": " We can look behind objects and see things that we wouldn't see otherwise."}, {"start": 74.24, "end": 82.39999999999999, "text": " Beautiful parallax effects appear as objects at different distances move different amounts as we move the camera around."}, {"start": 82.39999999999999, "end": 87.11999999999999, "text": " You see that the foreground changes a great deal."}, {"start": 87.11999999999999, "end": 93.44, "text": " The buildings in the background less so and the hills behind them even less so."}, {"start": 93.44, "end": 96.84, "text": " These photos truly come alive with this new method."}, {"start": 96.84, "end": 105.96000000000001, "text": " An earlier algorithm, the legendary patch match method from more than a decade ago, could perform something that we call image in painting."}, {"start": 105.96000000000001, "end": 114.52000000000001, "text": " Image in painting means looking at what we see in these images and trying to fill in missing information with data that makes sense."}, {"start": 114.52000000000001, "end": 126.2, "text": " The key difference here is that this new technique uses a learning method and does this image in painting in 3D and it not only fills in color but depth information as well."}, {"start": 126.2, "end": 128.92000000000002, "text": " What a crazy, amazing idea."}, {"start": 128.92000000000002, "end": 135.32, "text": " However, this is not the first method to perform this so how does it compare to other research works."}, {"start": 135.32, "end": 136.84, "text": " Let's have a look together."}, {"start": 136.84, "end": 142.12, "text": " Previous methods have a great deal of warping and distortions on the bathtub here."}, {"start": 145.08, "end": 149.08, "text": " And if you look at the new method you see that it is much cleaner."}, {"start": 149.08, "end": 157.96, "text": " There is still a tiny bit of warping but it is significantly better."}, {"start": 157.96, "end": 163.96, "text": " The dog head here with this previous method seems to be bobbing around a great deal."}, {"start": 163.96, "end": 
172.20000000000002, "text": " While the other methods also have some problems with it, look at these two."}, {"start": 172.2, "end": 182.67999999999998, "text": " And if you look at how the new method handles it, it is significantly more stable and you see that these previous techniques are from just one or two years ago."}, {"start": 182.67999999999998, "end": 186.11999999999998, "text": " It is unbelievable how far we have come since."}, {"start": 186.11999999999998, "end": 187.48, "text": " Bravo."}, {"start": 187.48, "end": 193.16, "text": " So this was a qualitative comparison or in other words we looked at the results."}, {"start": 193.16, "end": 195.64, "text": " What about the quantitative differences?"}, {"start": 195.64, "end": 197.32, "text": " What do the numbers say?"}, {"start": 197.32, "end": 201.95999999999998, "text": " Look at the PSNR column here, this means the peak signal to noise ratio."}, {"start": 201.95999999999998, "end": 206.04, "text": " This is subject to maximization as the up arrow denotes here."}, {"start": 206.04, "end": 207.88, "text": " The higher the better."}, {"start": 207.88, "end": 213.32, "text": " The difference is between one half to two and a half points when compared to previous methods,"}, {"start": 213.32, "end": 215.79999999999998, "text": " which does not sound like a lot at all."}, {"start": 215.79999999999998, "end": 217.79999999999998, "text": " So what happened here?"}, {"start": 217.79999999999998, "end": 222.12, "text": " Note that PSNR is not a linear but a logarithmic scale."}, {"start": 222.12, "end": 228.28, "text": " So this means that a small numeric difference typically translates to a great deal of difference in the images,"}, {"start": 228.28, "end": 233.48000000000002, "text": " even if the numeric difference is just 0.5 points on the PSNR scale."}, {"start": 233.48000000000002, "end": 237.88, "text": " However, if you look at SSIM, the structure of similarity metric,"}, {"start": 237.88, "end": 243.24, "text": " all of them are quite similar and the previous technique appears to be even winning here."}, {"start": 243.24, "end": 247.48000000000002, "text": " But this was a method that worked the dog head and the individual comparisons,"}, {"start": 247.48000000000002, "end": 251.16, "text": " the new method came out significantly better than this."}, {"start": 251.16, "end": 253.64, "text": " So what is going on here?"}, {"start": 253.64, "end": 262.04, "text": " Well, have a look at this metric, LPIPS, which was developed at the UC Berkeley, OpenAI, and Adobe Research."}, {"start": 262.04, "end": 267.15999999999997, "text": " At the risk of simplifying the situation, this uses a neural network to look at an image"}, {"start": 267.15999999999997, "end": 273.96, "text": " and uses its inner representation to decide how close the two images are to each other."}, {"start": 273.96, "end": 282.28, "text": " And loosely speaking, it kind of thinks about the differences as we humans do and is an excellent tool to compare images."}, {"start": 282.28, "end": 287.15999999999997, "text": " And sure enough, this also concludes that the new method performs best."}, {"start": 287.15999999999997, "end": 290.28, "text": " However, this method is still not perfect."}, {"start": 290.28, "end": 295.64, "text": " There is some flickering going on behind these fences."}, {"start": 295.64, "end": 298.84, "text": " The transparency of the glass here isn't perfect,"}, {"start": 298.84, "end": 306.03999999999996, "text": " but witnessing this huge 
leap in the quality of results in such little time is truly a sight to behold."}, {"start": 306.03999999999996, "end": 307.96, "text": " What a time to be alive!"}, {"start": 307.96, "end": 312.91999999999996, "text": " I started this series to make people feel how I feel when I read these papers"}, {"start": 312.91999999999996, "end": 315.96, "text": " and I really hope that it goes through with this paper."}, {"start": 315.96, "end": 317.64, "text": " Absolutely amazing!"}, {"start": 317.64, "end": 321.55999999999995, "text": " What is even more amazing is that with a tiny bit of technical knowledge,"}, {"start": 321.55999999999995, "end": 323.79999999999995, "text": " you can run the source code in your browser,"}, {"start": 323.79999999999995, "end": 327.32, "text": " so make sure to have a look at the link in the video description."}, {"start": 327.32, "end": 329.48, "text": " Let me know in the comments how it went."}, {"start": 329.48, "end": 334.2, "text": " What you see here is an instrumentation of this exact paper we have talked about"}, {"start": 334.2, "end": 336.76, "text": " which was made by weights and biases."}, {"start": 336.76, "end": 342.04, "text": " I think organizing these experiments really showcases the usability of their system."}, {"start": 342.04, "end": 346.6, "text": " Wates and biases provides tools to track your experiments in your deep learning projects."}, {"start": 346.6, "end": 350.28, "text": " Their system is designed to save you a ton of time and money"}, {"start": 350.28, "end": 356.36, "text": " and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research,"}, {"start": 356.36, "end": 358.04, "text": " GitHub and more."}, {"start": 358.04, "end": 363.16, "text": " And the best part is that if you have an open source, academic or personal project,"}, {"start": 363.16, "end": 365.16, "text": " you can use their tools for free."}, {"start": 365.16, "end": 367.64, "text": " It really is as good as it gets."}, {"start": 367.64, "end": 371.64, "text": " Make sure to visit them through wnb.com slash papers"}, {"start": 371.64, "end": 377.24, "text": " or click the link in the video description to start tracking your experiments in five minutes."}, {"start": 377.24, "end": 380.44, "text": " Our thanks to weights and biases for their long-standing support"}, {"start": 380.44, "end": 383.24, "text": " and for helping us make better videos for you."}, {"start": 383.24, "end": 385.48, "text": " Thanks for watching and for your generous support."}, {"start": 385.48, "end": 387.48, "text": " And I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=wg3upHE8qJw
Can an AI Learn Lip Reading?
❤️ Check out Snap's Residency Program and apply here: https://lensstudio.snapchat.com/snap-ar-creator-residency-program/?utm_source=twominutepapers&utm_medium=video&utm_campaign=tmp_ml_residency ❤️ Try Snap's Lens Studio here: https://lensstudio.snapchat.com/ 📝 The paper "Learning Individual Speaking Styles for Accurate Lip to Speech Synthesis" is available here: http://cvit.iiit.ac.in/research/projects/cvit-projects/speaking-by-observing-lip-movements Our earlier video on the "bag of chips" sound reconstruction is available here: https://www.youtube.com/watch?v=2i1hrywDwPo 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. More info if you would like to appear here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-4814562/ Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #Lipreading
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. When watching science fiction movies, we often encounter crazy devices and technologies that don't really exist, or sometimes ones that are not even possible to make. For instance, reconstructing sound from vibrations would be an excellent example of that and could make a great novel with the Secret Service trying to catch dangerous criminals. Except that it has already been done in real-life research. I think you can imagine how surprised I was when I first saw this paper in 2014 that showcased a result where a camera looks at this bag of chips, and from these tiny, tiny vibrations, it could reconstruct the sounds in the room. Let's listen. Mary had a little lamb whose fleece was white as snow, and everywhere that Mary went, the lamb was sure to go. I was wrong in looking at these beautiful photos of her and everywhere that Mary was, I was stored to go. Yes, this indeed sounds like science fiction. But 2014 was a long, long time ago, and since then we have a selection of powerful learning algorithms, and the question is, what is the next idea that sounded completely impossible a few years ago but is now possible? Well, what about looking at silent footage from a speaker and trying to guess what they were saying? Checkmark. That sounds absolutely impossible to me, yet this new technique is able to produce this entire speech after looking at video footage of the lip movements. Let's listen. Between the wavelength, the frequency and the speed of electromagnetic radiation. In fact, the product of the wavelength and the frequency is its speed. Wow. So, the first question is, of course, what was used as the training data? It used a dataset with lecture videos and chess commentary from five speakers, and make no mistake, it takes a ton of data from these speakers, about 20 hours from each, but it uses video that was shot in a natural setting, which is something that we have in abundance on YouTube and other places on the internet. Note that the neural network works on the same speakers it was trained on and was able to learn their gestures and lip movements remarkably well. However, this is not the first work attempting to do this, so let's see how it compares to the competition. Set white with i7 soon. Set white with i7 soon. The new one is very close to the true spoken sentence. Let's look at another one. Erfendowance of action-famed decision. Eight field guns were captured in position. Note that there are gestures, a reasonable amount of head movement and other factors at play, and the algorithm still does amazingly well. Potential applications of this could be video conferencing in zones where we have to be silent, giving a voice to people with the inability to speak due to aphonia or other conditions, or potentially fixing a piece of video footage where parts of the speech signal are corrupted. In these cases, the gaps could be filled with such a technique. Look. Let's look at a cell potential of 0.5 volts for an oxidation of bromide by permanganate. The question I have is what pH would cause this voltage? Would it be a pH? Now, let's have a look under the hood. If we visualize the activations within this neural network, we see that it found out that it mainly looks at the mouth of the speaker. That is, of course, not surprising. However, what is surprising is that the other regions, for instance, around the forehead and eyebrows, are also important to the attention mechanism. 
Perhaps this could mean that it also looks at the gestures of the speaker and uses that information for the speech synthesis. I find this aspect of the work very intriguing and would love to see some additional analysis on that. There is so much more in the paper; for instance, I mentioned giving a voice to people with aphonia, which should not be possible because we are training these neural networks for a specific speaker, but with an additional speaker embedding step, it is possible to pair up any speaker with any voice. This is another amazing work that makes me feel like we are living in a science fiction world. I can only imagine what we will be able to do with this technique two more papers down the line. If you have any ideas, feel free to speculate in the comments section below. What a time to be alive! This episode has been supported by Snap Inc. What you see here is Snap ML, a framework that helps you bring your own machine learning models to Snapchat's AR Lenses. You can build augmented reality experiences for Snapchat's hundreds of millions of users and help them see the world through a different lens. You can also apply to Snap's AR Creator Residency program with a proposal of how you would use Lens Studio for a creative project. If selected, you could receive a grant between $1,000 and $5,000 and work with Snap's technical and creative teams to bring your ideas to life. It doesn't get any better than that. Make sure to go to the link in the video description and apply for their residency program and try Snap ML today. Our thanks to Snap Inc. for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
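The transcript mentions that an additional speaker embedding step makes it possible to pair any speaker's lip movements with any voice. As a rough, hedged illustration of that conditioning idea only (the dimensions, the random arrays, and the concatenation scheme are assumptions, not the paper's actual architecture), the lip features and a voice code could simply be combined before a decoder:

```python
# Minimal sketch: condition lip-movement features on a separate speaker/voice
# embedding by tiling the embedding over time and concatenating the two.
import numpy as np

rng = np.random.default_rng(0)
lip_features = rng.standard_normal((1, 75, 256))   # 75 video frames, 256-dim feature each
voice_embedding = rng.standard_normal((1, 64))     # code for the target voice

# Repeat the voice code for every frame, then concatenate along the feature axis.
voice_tiled = np.repeat(voice_embedding[:, None, :], lip_features.shape[1], axis=1)
conditioned = np.concatenate([lip_features, voice_tiled], axis=-1)
print(conditioned.shape)  # (1, 75, 320) -- a decoder would map this to a spectrogram
```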
[{"start": 0.0, "end": 4.36, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karoizhou-nai-Fehir."}, {"start": 4.36, "end": 9.64, "text": " When watching science fiction movies, we often encounter crazy devices and technologies"}, {"start": 9.64, "end": 15.6, "text": " that don't really exist or sometimes, once, that are not even possible to make."}, {"start": 15.6, "end": 21.16, "text": " For instance, reconstructing sound from vibrations would be an excellent example of that"}, {"start": 21.16, "end": 26.560000000000002, "text": " and could make a great novel with the Secret Service trying to catch dangerous criminals."}, {"start": 26.56, "end": 30.64, "text": " Except that it has already been done in real-life research."}, {"start": 30.64, "end": 35.92, "text": " I think you can imagine how surprised I was when I first saw this paper in 2014"}, {"start": 35.92, "end": 39.68, "text": " that showcased a result where a camera looks at this bag of chips"}, {"start": 39.68, "end": 45.16, "text": " and from these tiny, tiny vibrations, it could reconstruct the sounds in the room."}, {"start": 45.16, "end": 46.96, "text": " Let's listen."}, {"start": 46.96, "end": 57.2, "text": " Mary had a little lamb whose fleece was white as snow and everywhere that Mary went, that lamb was stored to go."}, {"start": 57.2, "end": 67.44, "text": " I was wrong in looking at these beautiful photos of her and everywhere that Mary was, I was stored to go."}, {"start": 67.44, "end": 70.6, "text": " Yes, this indeed sounds like science fiction."}, {"start": 70.6, "end": 77.72, "text": " But 2014 was a long, long time ago and since then we have a selection of powerful learning algorithms"}, {"start": 77.72, "end": 83.32, "text": " and the question is, was the next idea that sounded completely impossible a few years ago,"}, {"start": 83.32, "end": 85.24, "text": " which is now possible?"}, {"start": 85.24, "end": 91.8, "text": " Well, what about looking at silent footage from a speaker and trying to guess what they were saying?"}, {"start": 91.8, "end": 98.52, "text": " Checkmark that sounds absolutely impossible to me, yet this new technique is able to produce"}, {"start": 98.52, "end": 103.8, "text": " the entire tube of this speech after looking at a video footage of the leap movements."}, {"start": 103.8, "end": 104.75999999999999, "text": " Let's listen."}, {"start": 104.75999999999999, "end": 110.03999999999999, "text": " Between the wave light, the frequency and the speed of electromagnetic radiation."}, {"start": 110.03999999999999, "end": 113.96, "text": " In fact, the product of the wavelength at the frequency is its speed."}, {"start": 114.91999999999999, "end": 116.19999999999999, "text": " Wow."}, {"start": 116.19999999999999, "end": 121.08, "text": " So, the first question is, of course, what was used as the training data?"}, {"start": 121.08, "end": 126.12, "text": " It used a dataset with lecture videos and chess commentary from five speakers"}, {"start": 126.12, "end": 132.36, "text": " and make no mistake, it takes a ton of data from these speakers, about 20 hours from each,"}, {"start": 132.36, "end": 138.28, "text": " but it uses video that was shot in a natural setting, which is something that we have in abundance"}, {"start": 138.28, "end": 140.68, "text": " on YouTube and other places on the internet."}, {"start": 141.48000000000002, "end": 146.92000000000002, "text": " Note that the neural network works on the same speakers it was trained on and was able to learn"}, 
{"start": 146.92000000000002, "end": 149.88, "text": " their gestures and leap movements remarkably well."}, {"start": 150.52, "end": 156.04000000000002, "text": " However, this is not the first work attempting to do this, so let's see how it compares"}, {"start": 156.04, "end": 157.16, "text": " to the competition."}, {"start": 160.67999999999998, "end": 162.92, "text": " Set white with i7 soon."}, {"start": 163.95999999999998, "end": 166.04, "text": " Set white with i7 soon."}, {"start": 169.39999999999998, "end": 172.51999999999998, "text": " The new one is very close to the true spoken sentence."}, {"start": 173.16, "end": 174.12, "text": " Let's look at another one."}, {"start": 175.23999999999998, "end": 178.35999999999999, "text": " Erfendowance of action-famed decision."}, {"start": 179.88, "end": 182.92, "text": " Eight field guns were captured in position."}, {"start": 182.92, "end": 193.88, "text": " Note that there are gestures, a reasonable amount of head movement and other factors that play"}, {"start": 193.88, "end": 197.07999999999998, "text": " and the algorithm still does amazingly well."}, {"start": 197.64, "end": 203.07999999999998, "text": " Potential applications of this could be video conferencing in zones where we have to be silent,"}, {"start": 203.07999999999998, "end": 209.07999999999998, "text": " giving a voice to people with the inability to speak, do to a phonia or other conditions,"}, {"start": 209.08, "end": 215.08, "text": " or potentially fixing a piece of video footage where parts of the speech signal are corrupted."}, {"start": 215.08, "end": 218.20000000000002, "text": " In these cases, the gaps could be filled with such a technique."}, {"start": 218.84, "end": 219.08, "text": " Look."}, {"start": 220.20000000000002, "end": 223.48000000000002, "text": " Let's look at a cell potential of 0.5 volts"}, {"start": 224.44, "end": 228.12, "text": " for an oxidation of bromide by permanganate."}, {"start": 229.08, "end": 233.88000000000002, "text": " The question I have is what pH would cause this voltage?"}, {"start": 233.88000000000002, "end": 235.08, "text": " Would it be a pH?"}, {"start": 235.08, "end": 238.44000000000003, "text": " Now, let's have a look under the hood."}, {"start": 238.44000000000003, "end": 245.88000000000002, "text": " If we visualize the activations within this neural network, we see that it found out that it mainly looks at the mouth of the speaker."}, {"start": 246.44, "end": 249.0, "text": " That is, of course, not surprising."}, {"start": 249.0, "end": 255.16000000000003, "text": " However, what is surprising is that the other regions, for instance, around the forehead and eyebrows,"}, {"start": 255.16000000000003, "end": 257.64, "text": " are also important to the attention mechanism."}, {"start": 258.2, "end": 262.84000000000003, "text": " Perhaps this could mean that it also looks at the gestures of the speaker"}, {"start": 262.84, "end": 266.03999999999996, "text": " and uses that information for the speech synthesis."}, {"start": 266.03999999999996, "end": 272.03999999999996, "text": " I find this aspect of the work very intriguing and would love to see some additional analysis on that."}, {"start": 272.67999999999995, "end": 278.67999999999995, "text": " There is so much more in the paper, for instance, I mentioned giving a voice to people with a phonia"}, {"start": 278.67999999999995, "end": 284.03999999999996, "text": " which should not be possible because we are training these neural networks for a specific speaker,"}, 
{"start": 284.03999999999996, "end": 290.84, "text": " but with an additional speaker embedding step, it is possible to pair up any speaker with any voice."}, {"start": 290.84, "end": 296.84, "text": " This is another amazing work that makes me feel like we are living in a science fiction world."}, {"start": 296.84, "end": 302.03999999999996, "text": " I can only imagine what we will be able to do with this technique two more papers down the line."}, {"start": 302.03999999999996, "end": 306.2, "text": " If you have any ideas, feel free to speculate in the comments section below."}, {"start": 306.2, "end": 308.03999999999996, "text": " What a time to be alive!"}, {"start": 308.03999999999996, "end": 310.91999999999996, "text": " This episode has been supported by Snap Inc."}, {"start": 310.91999999999996, "end": 318.67999999999995, "text": " What you see here is Snap ML, a framework that helps you bring your own machine learning models to Snapchat's A or Lenses."}, {"start": 318.68, "end": 323.96, "text": " You can build augmented reality experiences for Snap Chets hundreds of millions of users"}, {"start": 323.96, "end": 326.92, "text": " and help them see the world through a different lens."}, {"start": 326.92, "end": 330.92, "text": " You can also apply to Snap's AR Creator Residency program"}, {"start": 330.92, "end": 335.4, "text": " with a proposal of how you would use Lens Studio for a creative project."}, {"start": 335.4, "end": 342.36, "text": " If selected, you could receive a grant between 1 to $5,000 and work with Snap's technical"}, {"start": 342.36, "end": 345.4, "text": " and creative teams to bring your ideas to life."}, {"start": 345.4, "end": 347.64, "text": " It doesn't get any better than that."}, {"start": 347.64, "end": 350.2, "text": " Make sure to go to the link in the video description"}, {"start": 350.2, "end": 354.44, "text": " and apply for their residency program and try Snap ML today."}, {"start": 354.44, "end": 358.44, "text": " Our thanks to Snap Inc for helping us make better videos for you."}, {"start": 358.44, "end": 385.88, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=oHLR287rDRA
This is Geometry Processing Made Easy!
❤️ Check out Linode here and get $20 free credit on your account: https://www.linode.com/papers 📝 The paper "Monte Carlo Geometry Processing: A Grid-Free Approach to PDE-Based Methods on Volumetric Domains" is available here: https://www.cs.cmu.edu/~kmcrane/Projects/MonteCarloGeometryProcessing/index.html Implementations: - https://twitter.com/iquilezles/status/1258218688726962183 - https://twitter.com/iquilezles/status/1258237114958802944 - https://www.shadertoy.com/view/wdffWj Our mega video on Multiple Importance Sampling: https://www.youtube.com/watch?v=TbWQ4lMnLNw Koiava’s MIS implementation: https://www.shadertoy.com/view/4sSXWt My course at the Vienna University of Technology on light transport is available here. It is completely free for everyone: https://users.cg.tuwien.ac.at/zsolnai/gfx/rendering-course/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. More info if you would like to appear here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. I think this might be it. This paper is called Monte Carlo Geometry Processing, and in my opinion, this may be one of the best, if not the best, computer graphics paper of the year. There are so many amazing results that I first thought this one paper could be the topic of ten Two Minute Papers videos, but I will attempt to cram everything into this one video. It is quite challenging to explain, so please bear with me, I'll try my best. To even have a fighting chance at understanding what is going on here, first we start out with one of the most beautiful topics in computer graphics, which is none other than light transport. To create a beautiful light simulation, we have to solve something that we call the rendering equation. For practical cases, it is currently impossible to solve it like we usually solve any other equation. However, what we can do is choose a random ray, simulate its path as it bounces around in the scene, and compute how it interacts with the objects and materials within this scene. As we do this with more and more rays, we get more information about what we see in the scene. That's good, but look, it is noisy. As we add even more rays, this noise slowly evens out, and in the end we get a perfectly clean image. This takes place with the help of a technique that we call Monte Carlo integration, which involves randomness. At the risk of oversimplifying the situation, in essence this technique says that we cannot solve the problem, but we can take samples from the problem, and if we do it in a smart way, eventually we will be able to solve it. In light transport, the problem is the rendering equation, which we cannot solve directly, but we can take samples: one sample is simulating one ray. If we have enough rays, we have a solution. However, it hasn't always been like this. Before Monte Carlo integration, light transport was done through a technique called radiosity. The key issue with radiosity was that the geometry of the scene had to be sliced up into many small pieces, and the light scattering events had to be evaluated between these small pieces. It could not handle all light transport phenomena, and the geometry processing part was a major headache; Monte Carlo integration was a revelation that breathed new life into this field. However, there are still many geometry processing problems that include these headaches, and hold on to your papers, because this paper shows us that we can apply Monte Carlo integration to many of these problems too. For instance, one: it can resolve the rich external and internal structure of this model. With traditional techniques, this would normally take more than 14 hours and 30 gigabytes of memory, but if we apply Monte Carlo integration to this problem, we can get a somewhat noisy preview of the result in less than one minute. Of course, over time, as we compute more samples, the noise clears up and we get this beautiful final result. And the concept can be used for so much more that it truly makes my head spin. Let's discuss six amazing applications, while noting that there are so many more in the paper, which you can and should check out in the video description. Two: it can also compute a CT scan of the infamous shovel-nosed frog that you see here, and instead of creating the full 3D solution, we only have to compute a 2D slice of it, which is much, much cheaper. Three:
It can also edit these curves, and note that the key part is that we can do that without the major headache of creating an intermediate triangle mesh geometry for it. Four: it also supports denoising techniques, so we don't have to compute too many samples to get a clear image or piece of geometry. Five: performing Helmholtz-Hodge decomposition with this method is also possible. This is a technique that is used widely in many domains, for instance, it is responsible for ensuring the stability of many fluid simulation programs, and this technique can also produce these decompositions. And interestingly, here it is used to represent 3D objects without the usual triangle meshes that we use in computer graphics. Six: it supports multiple importance sampling as well. This means that if we have multiple sampling strategies, multiple ways to solve a problem that have different advantages and disadvantages, it combines them in a way that we get the best of all of them. We had a mega episode on multiple importance sampling, it has lots of amazing uses in light transport simulations, so if you would like to hear more, make sure to check that out in the video description. But wait, these are all difficult problems. One surely needs a PhD and years of experience in computer graphics to implement this, right? When seeing a work like this, we often ask, okay, it does something great, but how complex is it? How many days do I have to work to re-implement it? Please take a guess and let me know what the guess was in the comments section. And now, what you see here is none other than the source code for the core of the method, and what's even more, a bunch of implementations of it already exist. And if you see that the paper has been re-implemented around day one, you know it's good. So no wonder this paper has been accepted to SIGGRAPH, perhaps the most prestigious computer graphics conference. It is immensely difficult to get a paper accepted there, and I would say this one more than earned it. Huge congratulations to the first author of the paper, Rohan Sawhney; he is currently a PhD student, and note that this was his second published paper. Unreal. Such a great leap for the field in just one paper. Also congratulations to Professor Keenan Crane, who advised this project and many other wonderful works in the last few years. I cannot wait to see what they will be up to next, and I hope that now you are just as excited. This episode has been supported by Linode. Linode is the world's largest independent cloud computing provider. Linode gives you full back-end access to your server, which is your step up to powerful, fast, fully configurable cloud computing. Linode also has one-click apps that streamline your ability to deploy websites, personal VPNs, game servers and more. If you need something as small as a personal online portfolio, Linode has your back, and if you need to manage tons of clients' websites and reliably serve them to millions of visitors, Linode can do that too. What's more, they offer affordable GPU instances featuring the Quadro RTX 6000, which is tailor-made for AI, scientific computing and computer graphics projects. If only I had access to a tool like this while I was working on my last few papers. To receive $20 in credit on your new Linode account, visit linode.com/papers or click the link in the video description and give it a try today. Thanks to Linode for supporting the series and helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
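To make the take-samples-instead-of-solving idea concrete, here is a toy walk-on-spheres estimator in Python. Walk-on-spheres is the classic Monte Carlo recipe that this line of work builds on, but the snippet below is only a simplified sketch under strong assumptions: the domain is just the unit square, the PDE is the Laplace equation with Dirichlet boundary data g(x, y) = x (whose exact solution is u(x, y) = x, so the estimate is easy to check), and none of the paper's generality, boundary treatments, or variance reduction is included.

# A toy walk-on-spheres estimator (a minimal sketch of the general idea, not the
# paper's implementation). We solve the Laplace equation on the unit square with
# boundary data g(x, y) = x, whose exact harmonic solution is u(x, y) = x.
import math
import random

EPS = 1e-4  # stop a walk when this close to the boundary

def dist_to_boundary(x, y):
    # Distance from (x, y) to the boundary of the unit square.
    return min(x, 1.0 - x, y, 1.0 - y)

def g(x, y):
    # Dirichlet boundary values (assumed for this toy example).
    return x

def walk_on_spheres(x, y):
    """One random walk: jump on maximal empty spheres until we reach the boundary."""
    while True:
        r = dist_to_boundary(x, y)
        if r < EPS:
            return g(x, y)            # close enough: read off the boundary data
        theta = random.uniform(0.0, 2.0 * math.pi)
        x += r * math.cos(theta)      # jump to a uniform point on the sphere
        y += r * math.sin(theta)

def estimate(x, y, walks=20000):
    # Average many independent walks; more walks means less noise.
    return sum(walk_on_spheres(x, y) for _ in range(walks)) / walks

print(estimate(0.3, 0.7))  # should hover around 0.3

Just like with the rendering equation, each walk is one noisy sample, and averaging more of them makes the noise even out.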
[{"start": 0.0, "end": 4.48, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Dr. Karajol Naifahir."}, {"start": 4.48, "end": 6.88, "text": " I think this might be it."}, {"start": 6.88, "end": 12.52, "text": " This paper is called Monte Carlo Geometry Processing and in my opinion this may be one of the"}, {"start": 12.52, "end": 16.68, "text": " best if not the best computer graphics paper of the year."}, {"start": 16.68, "end": 22.0, "text": " There will be so many amazing results I first thought this one paper could be the topic"}, {"start": 22.0, "end": 27.88, "text": " of 10 2 Minute Papers videos but I will attempt to cram everything into this one video."}, {"start": 27.88, "end": 32.44, "text": " It is quite challenging to explain so please bear with me I'll try my best."}, {"start": 32.44, "end": 37.96, "text": " To even have a fighting chance at understanding what is going on here first we start out"}, {"start": 37.96, "end": 43.96, "text": " with one of the most beautiful topics in computer graphics which is none other than light transport."}, {"start": 43.96, "end": 48.72, "text": " To create a beautiful light simulation we have to solve something that we call the rendering"}, {"start": 48.72, "end": 49.8, "text": " equation."}, {"start": 49.8, "end": 55.08, "text": " For practical cases it is currently impossible to solve it like we usually solve any other"}, {"start": 55.08, "end": 56.08, "text": " equation."}, {"start": 56.08, "end": 62.199999999999996, "text": " However, what we can do is that we can choose a random ray, simulate its path as it bounces"}, {"start": 62.199999999999996, "end": 67.48, "text": " around in the scene and compute how it interacts with the objects and materials within this"}, {"start": 67.48, "end": 68.48, "text": " scene."}, {"start": 68.48, "end": 73.48, "text": " As we do it with more and more rays we get more information about what we see in the scene"}, {"start": 73.48, "end": 77.68, "text": " that's good but look it is noisy."}, {"start": 77.68, "end": 84.0, "text": " As we add even more rays this noise slowly evens out and in the end we get a perfectly"}, {"start": 84.0, "end": 85.24, "text": " clean image."}, {"start": 85.24, "end": 90.03999999999999, "text": " This takes place with the help of a technique that we call Monte Carlo integration which"}, {"start": 90.03999999999999, "end": 91.91999999999999, "text": " involves randomness."}, {"start": 91.91999999999999, "end": 97.03999999999999, "text": " At the risk of oversimplifying the situation in essence this technique says that we cannot"}, {"start": 97.03999999999999, "end": 102.64, "text": " solve the problem but we can take samples from the problem and if we do it in a smart"}, {"start": 102.64, "end": 106.08, "text": " way eventually we will be able to solve it."}, {"start": 106.08, "end": 111.64, "text": " In light transport the problem is the rendering equation which we cannot solve but we can"}, {"start": 111.64, "end": 115.72, "text": " take samples one sample is simulating one ray."}, {"start": 115.72, "end": 118.72, "text": " If we have enough rays we have a solution."}, {"start": 118.72, "end": 122.24, "text": " However, it hasn't always been like this."}, {"start": 122.24, "end": 127.84, "text": " Before Monte Carlo integration light transport was done through a technique called radiosity."}, {"start": 127.84, "end": 133.4, "text": " The key issue with radiosity was that the geometry of the scene had to be sliced up into many"}, {"start": 133.4, "end": 139.6, 
"text": " small pieces and the light scattering events had to be evaluated between these small pieces."}, {"start": 139.6, "end": 144.48, "text": " It could not handle all light transport phenomena and the geometry processing part was a major"}, {"start": 144.48, "end": 150.35999999999999, "text": " headache and Monte Carlo integration was a revelation that breathed new life into this"}, {"start": 150.35999999999999, "end": 151.35999999999999, "text": " field."}, {"start": 151.35999999999999, "end": 156.32, "text": " However there are still many geometry processing problems that include these headaches"}, {"start": 156.32, "end": 162.64, "text": " and hold on to your papers because this paper shows us that we can apply Monte Carlo integration"}, {"start": 162.64, "end": 164.88, "text": " to many of these problems too."}, {"start": 164.88, "end": 171.2, "text": " For instance one it can resolve the rich external and internal structure of this end."}, {"start": 171.2, "end": 176.2, "text": " With traditional techniques this would normally take more than 14 hours and 30 gigabytes"}, {"start": 176.2, "end": 182.35999999999999, "text": " of memory but if we apply Monte Carlo integration to this problem we can get a somewhat noisy"}, {"start": 182.35999999999999, "end": 185.96, "text": " preview of the result in less than one minute."}, {"start": 185.96, "end": 192.76, "text": " Of course over time as we compute more samples the noise clears up and we get this beautiful"}, {"start": 192.76, "end": 194.24, "text": " final result."}, {"start": 194.24, "end": 199.04000000000002, "text": " And the concept can be used for so much more it truly makes my head spin."}, {"start": 199.04000000000002, "end": 203.64000000000001, "text": " Let's discuss six amazing applications while noting that there are so many more in the"}, {"start": 203.64000000000001, "end": 208.48000000000002, "text": " paper which you can and should check out in the video description."}, {"start": 208.48000000000002, "end": 214.44, "text": " For instance two it can also compute a CT scan of the infamous shovel nose frog that you"}, {"start": 214.44, "end": 220.8, "text": " see here and instead of creating the full 3D solution we only have to compute a 2D slice"}, {"start": 220.8, "end": 224.52, "text": " of it which is much much cheaper."}, {"start": 224.52, "end": 225.52, "text": " 3."}, {"start": 225.52, "end": 230.56, "text": " It can also edit these curves and note that the key part is that we can do that without"}, {"start": 230.56, "end": 235.8, "text": " the major headache of creating an intermediate triangle mass geometry for it."}, {"start": 235.8, "end": 236.8, "text": " 4."}, {"start": 236.8, "end": 241.52, "text": " It also supports denoising techniques so we don't have to compute too many samples to get"}, {"start": 241.52, "end": 244.52, "text": " a clear image or piece of geometry."}, {"start": 244.52, "end": 245.52, "text": " 5."}, {"start": 245.52, "end": 250.52, "text": " Performing hamholes hodge decomposition with this method is also possible."}, {"start": 250.52, "end": 255.36, "text": " This is a technique that is used widely in many domains for instance it is responsible"}, {"start": 255.36, "end": 261.72, "text": " to ensure the stability of many fluid simulation programs and this technique can also produce"}, {"start": 261.72, "end": 263.64, "text": " these decompositions."}, {"start": 263.64, "end": 269.76, "text": " And interestingly here it is used to represent 3D objects without the usual triangle 
meshes"}, {"start": 269.76, "end": 272.2, "text": " that we use in computer graphics."}, {"start": 272.2, "end": 273.2, "text": " 6."}, {"start": 273.2, "end": 276.32, "text": " It supports multiple important sampling as well."}, {"start": 276.32, "end": 281.24, "text": " This means that if we have multiple sampling strategies, multiple ways to solve a problem"}, {"start": 281.24, "end": 286.32, "text": " that have different advantages and disadvantages it combines them in a way that we get the"}, {"start": 286.32, "end": 288.12, "text": " best of all of them."}, {"start": 288.12, "end": 293.8, "text": " We had a mega episode on multiple important sampling it has lots of amazing uses in light"}, {"start": 293.8, "end": 298.4, "text": " transport simulations so if you would like to hear more make sure to check that out"}, {"start": 298.4, "end": 299.4, "text": " in the video description."}, {"start": 299.4, "end": 303.15999999999997, "text": " But wait these are all difficult problems."}, {"start": 303.16, "end": 309.36, "text": " One surely needs a PhD and years of experience in computer graphics to implement this right?"}, {"start": 309.36, "end": 315.12, "text": " When seeing a work like this we often ask okay it does something great but how complex"}, {"start": 315.12, "end": 316.12, "text": " is it?"}, {"start": 316.12, "end": 318.96000000000004, "text": " How many days do I have to work to re-implement it?"}, {"start": 318.96000000000004, "end": 323.56, "text": " Please take a guess and let me know what the guess was in the comment section."}, {"start": 323.56, "end": 330.20000000000005, "text": " And now what you see here is none other than the source code for the core of the method"}, {"start": 330.2, "end": 334.64, "text": " and what's even more a bunch of implementations of it already exist."}, {"start": 334.64, "end": 339.84, "text": " And if you see that the paper has been re-implemented around day one you know it's good."}, {"start": 339.84, "end": 345.28, "text": " So no wonder this paper has been accepted to SIGRF perhaps the most prestigious computer"}, {"start": 345.28, "end": 346.68, "text": " graphics conference."}, {"start": 346.68, "end": 351.84, "text": " It is immensely difficult to get a paper accepted there and I would say this one more than"}, {"start": 351.84, "end": 352.84, "text": " earned it."}, {"start": 352.84, "end": 357.88, "text": " Huge congratulations to the first author of the paper Rohan Saunee he is currently a PhD"}, {"start": 357.88, "end": 362.32, "text": " student and note that this was his second published paper."}, {"start": 362.32, "end": 363.32, "text": " Unreal."}, {"start": 363.32, "end": 366.96, "text": " Such a great leap for the field in just one paper."}, {"start": 366.96, "end": 371.92, "text": " Also congratulations to Professor Keenam Krain who advised this project and many other"}, {"start": 371.92, "end": 374.36, "text": " wonderful works in the last few years."}, {"start": 374.36, "end": 379.32, "text": " I cannot wait to see what they will be up to next and I hope that now you are just as"}, {"start": 379.32, "end": 380.48, "text": " excited."}, {"start": 380.48, "end": 382.88, "text": " This episode has been supported by Linode."}, {"start": 382.88, "end": 386.6, "text": " Linode is the world's largest independent cloud computing provider."}, {"start": 386.6, "end": 391.68, "text": " Linode gives you full back and access to your server which is your step up to powerful,"}, {"start": 391.68, "end": 394.36, "text": " fast, fully 
configurable cloud computing."}, {"start": 394.36, "end": 399.36, "text": " Linode also has one click apps that streamline your ability to deploy websites, personal"}, {"start": 399.36, "end": 402.32000000000005, "text": " VPNs, game servers and more."}, {"start": 402.32000000000005, "end": 407.36, "text": " If you need something as small as a personal online portfolio, Linode has your back and"}, {"start": 407.36, "end": 412.6, "text": " if you need to manage tons of clients websites and reliably serve them to millions of visitors"}, {"start": 412.6, "end": 414.44, "text": " Linode can do that too."}, {"start": 414.44, "end": 421.4, "text": " But more, they offer affordable GPU instances featuring the Quadro RTX 6000 which is tailor-made"}, {"start": 421.4, "end": 426.12, "text": " for AI, scientific computing and computer graphics projects."}, {"start": 426.12, "end": 431.08, "text": " If only I had access to a tool like this while I was working on my last few papers."}, {"start": 431.08, "end": 437.6, "text": " To receive $20 in credit on your new Linode account, visit linode.com slash papers or click"}, {"start": 437.6, "end": 441.2, "text": " the link in the video description and give it a try today."}, {"start": 441.2, "end": 446.03999999999996, "text": " Thanks to Linode for supporting the series and helping us make better videos for you."}, {"start": 446.04, "end": 472.8, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=QSVrKK_uHoU
Amazing AR Effects Are Coming!
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers Their mentioned post is available here: https://app.wandb.ai/latentspace/published-work/The-Science-of-Debugging-with-W%26B-Reports--Vmlldzo4OTI3Ng 📝 The paper "Consistent Video Depth Estimation" is available here: https://roxanneluo.github.io/Consistent-Video-Depth-Estimation/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nader S., Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. More info if you would like to appear here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. When we, humans, look at an image or a piece of video footage, we understand the geometry of the objects in there so well that if we had the time and patience, we could draw a depth map that describes the distance of each object from the camera. This goes without saying. However, what does not go without saying is that if we could teach computers to do the same, we could do incredible things. For instance, this learning-based technique creates real-time defocus effects for virtual reality and computer games, and this one performs this Ken Burns effect in 3D, or in other words, zooms and pans around in a photograph, but with a beautiful twist, because in the meantime, it also reveals the depth of the image. With this data, we can even try to teach self-driving cars about depth perception to enhance their ability to navigate around safely. However, if you look here, you see two key problems. One, it is a little blurry and there are lots of fine details that it couldn't resolve, and two, it is flickering. In other words, there are abrupt changes from one image to the next one which shouldn't be there, as the objects in the video feed are moving smoothly. Smooth motion should mean smooth depth maps, and it is getting there, but it is still not the case here. So, I wonder if we could teach a machine to perform this task better. And more importantly, what new wondrous things can we do if we pull this off? This new technique is called consistent video depth estimation, and it promises smooth and detailed depth maps that are of much higher quality than what previous works offer. And now, hold onto your papers, because finally, these maps contain enough detail to open up the possibility of adding new objects to the scene, or even flooding the room with water, or adding many other really cool video effects. All of these will take the geometry of the existing real-world objects, for instance cats, into consideration. Very cool. The reason why we need such a consistent technique to pull this off is because if we have this flickering in time that we've seen here, then the depth of different objects suddenly bounces around over time, even for a stationary object. This means that in one frame the ball would be in front of the person, when in the next one it would suddenly think that it has to put the ball behind them, and then in the next one in front again, creating a not only jarring but quite unconvincing animation. What is really remarkable is that due to the consistency of the technique, none of that happens here. Love it. Here are some more results where you can see that the outlines of the objects in the depth map are really crisp and follow the changes really well over time. The snowing example here is one of my favorites, and it is really convincing. However, there are still a few spots where we can find some visual artifacts. For instance, as the subject is waving, there is lots of fine, high-frequency data around the fingers there, and if you look at the region behind the head closely, you find some more issues, or you can find that some balls are flickering on the table as we move the camera around. Compare that to previous methods that could not do nearly as well as this, and now we have something that is quite satisfactory. I can only imagine how good this will get two more papers down the line, and in the meantime, we'll be able to run these amazing effects even without having a real depth camera. What a time to be alive.
This episode has been supported by Weights & Biases. In this post, Latent Space shows you how reports from Weights & Biases are central to managing their machine learning workflow, and how to use their reports to quickly identify and debug issues with their deep learning models. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you have an open source, academic or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com/papers or click the link in the video description to start tracking your experiments in five minutes. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
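As a small numerical illustration of what flickering means, here is a toy Python sketch: it fabricates noisy per-frame depth estimates of a completely static scene, measures the frame-to-frame jitter, and shows that even a naive exponential moving average reduces it. This is emphatically not the paper's method, which enforces geometric consistency across frames rather than blind temporal smoothing, and all numbers here are synthetic.

# A crude illustration of depth flicker (not the paper's method): fabricate noisy
# per-frame depth maps of a static scene, measure the jitter, then show that even
# a naive temporal filter reduces it.
import numpy as np

rng = np.random.default_rng(0)
true_depth = rng.uniform(1.0, 5.0, size=(64, 64))               # static ground truth
frames = true_depth + rng.normal(0.0, 0.2, size=(30, 64, 64))   # noisy per-frame estimates

def temporal_jitter(depth_video):
    """Mean absolute change between consecutive frames (lower = more consistent)."""
    return float(np.mean(np.abs(np.diff(depth_video, axis=0))))

# Naive fix: exponential moving average over time.
smoothed = np.empty_like(frames)
smoothed[0] = frames[0]
alpha = 0.2
for t in range(1, len(frames)):
    smoothed[t] = alpha * frames[t] + (1.0 - alpha) * smoothed[t - 1]

print("jitter before:", temporal_jitter(frames))
print("jitter after :", temporal_jitter(smoothed))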
[{"start": 0.0, "end": 4.74, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karajjona Ifehir."}, {"start": 4.74, "end": 9.74, "text": " When we, humans, look at an image or a piece of video footage, we understand the geometry"}, {"start": 9.74, "end": 14.88, "text": " of the objects in there so well that if we had the time and patience, we could draw a"}, {"start": 14.88, "end": 20.1, "text": " depth map that describes the distance of each object from the camera."}, {"start": 20.1, "end": 21.78, "text": " This goes without saying."}, {"start": 21.78, "end": 26.54, "text": " However, what does not go without saying is that if we could teach computers to do the"}, {"start": 26.54, "end": 29.66, "text": " same, we could do incredible things."}, {"start": 29.66, "end": 34.6, "text": " For instance, this learning-based technique creates real-time defocus effects for virtual"}, {"start": 34.6, "end": 41.480000000000004, "text": " reality and computer games, and this one performs this Can Burns effect in 3D or in other"}, {"start": 41.480000000000004, "end": 47.24, "text": " words, zoom and pan around in a photograph, but with a beautiful twist because in the"}, {"start": 47.24, "end": 50.96, "text": " meantime, it also reveals the depth of the image."}, {"start": 50.96, "end": 56.68, "text": " With this data, we can even try to teach self-driving cars about depth perception to enhance"}, {"start": 56.68, "end": 59.56, "text": " their ability to navigate around safely."}, {"start": 59.56, "end": 63.76, "text": " However, if you look here, you see two key problems."}, {"start": 63.76, "end": 68.64, "text": " One, it is a little blurry and there are lots of fine details that it couldn't resolve"}, {"start": 68.64, "end": 70.88, "text": " and it is flickering."}, {"start": 70.88, "end": 75.4, "text": " In other words, there are a drop changes from one image to the next one which shouldn't"}, {"start": 75.4, "end": 80.32, "text": " be there as the objects in the video feed are moving smoothly."}, {"start": 80.32, "end": 85.96000000000001, "text": " Smooth motion should mean smooth depth maps and it is getting there, but it is still not"}, {"start": 85.96, "end": 86.96, "text": " the case here."}, {"start": 86.96, "end": 92.28, "text": " So, I wonder if we could teach a machine to perform this task better."}, {"start": 92.28, "end": 97.6, "text": " And more importantly, what new wondrous things can we do if we pull this off?"}, {"start": 97.6, "end": 103.11999999999999, "text": " This new technique is called consistent video depth estimation and it promises smooth and"}, {"start": 103.11999999999999, "end": 109.52, "text": " detailed depth maps that are of much higher quality than what previous works offer."}, {"start": 109.52, "end": 115.24, "text": " And now, hold onto your papers because finally, these maps contain enough detail to open up"}, {"start": 115.24, "end": 121.6, "text": " the possibility of adding new objects to the scene or even flood the room with water"}, {"start": 121.6, "end": 125.8, "text": " or add many other really cool video effects."}, {"start": 125.8, "end": 131.16, "text": " All of these will take the geometry of the existing real world objects, for instance cats"}, {"start": 131.16, "end": 132.84, "text": " into consideration."}, {"start": 132.84, "end": 135.16, "text": " Very cool."}, {"start": 135.16, "end": 139.88, "text": " The reason why we need such a consistent technique to pull this off is because if we have this"}, {"start": 139.88, "end": 
146.32, "text": " flickering in time that we've seen here, then the depth of different objects suddenly"}, {"start": 146.32, "end": 150.96, "text": " bounces around over time even for a stationary object."}, {"start": 150.96, "end": 155.84, "text": " This means that if one frame the ball would be in front of the person when in the next"}, {"start": 155.84, "end": 161.04, "text": " one it would suddenly think that it has to put the ball behind them and then in the next"}, {"start": 161.04, "end": 167.44, "text": " one front again creating a not only drawing but quite unconvincing animation."}, {"start": 167.44, "end": 171.64, "text": " What is really remarkable is that due to the consistency of the technique, none of that"}, {"start": 171.64, "end": 172.64, "text": " happens here."}, {"start": 172.64, "end": 173.64, "text": " Love it."}, {"start": 173.64, "end": 178.16, "text": " Here are some more results where you can see that the outlines of the objects in the depth"}, {"start": 178.16, "end": 185.4, "text": " map are really crisp and follow the changes really well over time."}, {"start": 185.4, "end": 190.52, "text": " The snowing example here is one of my favorites and it is really convincing."}, {"start": 190.52, "end": 195.96, "text": " However, there are still a few spots where we can find some visual artifacts."}, {"start": 195.96, "end": 201.24, "text": " For instance, as the subject is waving there is lots of fine high frequency data around"}, {"start": 201.24, "end": 206.08, "text": " the fingers there and if you look at the region behind the head closely you find some more"}, {"start": 206.08, "end": 212.28, "text": " issues or you can find that some balls are flickering on the table as we move the camera"}, {"start": 212.28, "end": 213.96, "text": " around."}, {"start": 213.96, "end": 219.56, "text": " Compare that to previous methods that could not do nearly as good as this and now we have"}, {"start": 219.56, "end": 222.08, "text": " something that is quite satisfactory."}, {"start": 222.08, "end": 227.96, "text": " I can only imagine how good this will get two more papers down the line and for the meantime"}, {"start": 227.96, "end": 233.76000000000002, "text": " we'll be able to run these amazing effects even without having a real depth camera."}, {"start": 233.76000000000002, "end": 235.60000000000002, "text": " What a time to be alive."}, {"start": 235.60000000000002, "end": 238.8, "text": " This episode has been supported by weights and biases."}, {"start": 238.8, "end": 244.52, "text": " In this post, latent space shows you how reports from weights and biases are central to"}, {"start": 244.52, "end": 250.12, "text": " managing their machine learning workflow and how to use their reports to quickly identify"}, {"start": 250.12, "end": 253.56, "text": " and debug issues with their deep learning models."}, {"start": 253.56, "end": 258.16, "text": " Wates and biases provides tools to track your experiments in your deep learning projects."}, {"start": 258.16, "end": 262.96, "text": " Their system is designed to save you a ton of time and money and it is actively used"}, {"start": 262.96, "end": 269.68, "text": " in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more."}, {"start": 269.68, "end": 274.64, "text": " And the best part is that if you have an open source, academic or personal project you"}, {"start": 274.64, "end": 276.76, "text": " can use their tools for free."}, {"start": 276.76, "end": 279.28000000000003, "text": " It really is as 
good as it gets."}, {"start": 279.28, "end": 285.4, "text": " Make sure to visit them through wnbe.com slash papers or click the link in the video description"}, {"start": 285.4, "end": 288.47999999999996, "text": " to start tracking your experiments in five minutes."}, {"start": 288.47999999999996, "end": 293.08, "text": " Our thanks to weights and biases for their long standing support and for helping us make"}, {"start": 293.08, "end": 294.35999999999996, "text": " better videos for you."}, {"start": 294.36, "end": 323.92, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=N6wn8zMRlVE
How Do Neural Networks Learn? 🤖
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers Their instrumentation of a previous work we covered is available here: https://app.wandb.ai/stacey/aprl/reports/Adversarial-Policies-in-Multi-Agent-Settings--VmlldzoxMDEyNzE 📝 The paper "CNN Explainer: Learning Convolutional Neural Networks with Interactive Visualization" is available here: https://github.com/poloclub/cnn-explainer Live web demo: https://poloclub.github.io/cnn-explainer/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nader S., Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. More info if you would like to appear here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. We have recently explored a few neural network-based learning algorithms that could perform material editing, physics simulations, and more. As some of these networks have hundreds of layers and often thousands of neurons within these layers, they are almost unfathomably complex. At this point, it makes sense to ask, can we understand what is going on inside these networks? Do we even have a fighting chance? Luckily, today, visualizing the inner workings of neural networks is a research subfield of its own, and the answer is, yes, we learn more and more every year. But there is also plenty more to learn. Earlier, we talked about a technique that we called activation maximization, which was about trying to find an input that makes a given neuron as excited as possible. This gives us some cues as to what the neural network is looking for in an image. A later work that proposes visualizing spatial activations gives us more information about the interactions between two or even more neurons. You see here with the dots that it provides us a dense sampling of the most likely activations, and this leads to a more complete, bigger-picture view of the inner workings of the neural network. This is what it looks like if we run it on one image. It also provides us with way more extra value, because so far, we have only seen how the neural network reacts to one image, but this method can be extended to see its reaction to not one, but one million images. You can see an example of that here. Later, it was also revealed that some of these image detector networks can assemble something that we call a pose-invariant dog head detector. What this means is that it can detect a dog head in many different orientations, and look, you see that it gets very excited by all of these good boys, plus this squirrel. Today's technique offers us an excellent tool to look into the inner workings of a convolutional neural network that is very capable of image-related operations, for instance, image classification. The task here is that we have an input image of a mug or a red panda, and the output should be a decision from the network that yes, what we are seeing is indeed a mug or a panda, or not. They apply something that we call a convolutional filter over an image, which tries to find interesting patterns that differentiate objects from each other. You can see how the outputs are related to the input image here. As you see, the neurons in the next layer will be assembled as a combination of the neurons from the previous layer. When we use the term deep learning, we typically refer to neural networks that have two or more of these inner layers. Each subsequent layer is built by taking all the neurons in the previous layer, which select for the features relevant to what the next neuron represents, for instance, the handle of the mug, and inhibit everything else. To make this a little clearer, this previous work tried to detect whether we have a car in an image by using these neurons. Here, the upper part looks like a car window, the next one resembles a car body, and the bottom of the third neuron clearly contains a wheel detector. This is the information that the neurons in the next layer are looking for. In the end, we make a final decision as to whether this is a panda or a mug by adding up all the intermediate results. The bluer this part is, the more relevant this neuron is to the final decision.
Here the neural network concludes that this doesn't look like a lifeboat or a ladybug at all, but it looks like pizza. If we look at the other sums, we see that the school bus and orange are not hopeless candidates, but still, the neural network does not have much doubt whether this is a pizza or not. And the best part is that you can even try it yourself in your browser if you click the link in the video description, run these simulations, and even upload your own image. Make sure that you upload or link something that belongs to one of these classes on the right to make this visualization work. So clearly, there is plenty more work for us to do to properly understand what is going on under the hood of neural networks, but I hope this quick rundown showcased how many facets there are to this neural network visualization subfield and how exciting it is. Make sure to post your experience in the comments section, whether the classification worked well for you or not. And if you wish to see more videos like this, make sure to subscribe and hit the bell icon to not miss future videos. What you see here is an instrumentation for a previous paper that we covered in this series, which was made by Weights & Biases. I think organizing these experiments really showcases the usability of their system. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you have an open source, academic or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com/papers or just click the link in the video description to start tracking your experiments in 5 minutes. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
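To ground the filter-and-sum explanation above, here is a tiny NumPy sketch: it slides a hand-made vertical-edge filter over an 8 by 8 image, applies a ReLU, pools the response into one piece of evidence, and turns the evidence for three hypothetical classes into probabilities with a softmax. It is a toy illustration of the mechanics only, not code from CNN Explainer, and the filter and class evidence values are made up.

# A minimal sketch of the two ideas above: sliding a small convolutional filter
# over an image to detect a pattern, and turning summed evidence into class
# probabilities. This is a toy, not CNN Explainer's code.
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (really cross-correlation, as in most deep learning libraries)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((8, 8))
image[:, 4] = 1.0                                  # a vertical stripe in the image

vertical_edge = np.array([[-1.0, 0.0, 1.0]] * 3)   # small filter that responds to vertical edges
feature_map = np.maximum(conv2d(image, vertical_edge), 0.0)  # ReLU keeps positive responses

# "Adding up the intermediate results": pool the feature map into one piece of
# evidence for a hypothetical stripe class, then softmax into probabilities.
evidence = np.array([feature_map.sum(), 0.5, 0.1])  # [stripe-ish class, class B, class C]
probs = np.exp(evidence - evidence.max())
probs /= probs.sum()
print(probs)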
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karajona Ifehir."}, {"start": 4.6000000000000005, "end": 10.0, "text": " We have recently explored a few neural network-based learning algorithms that could perform material"}, {"start": 10.0, "end": 13.52, "text": " editing, physics simulations, and more."}, {"start": 13.52, "end": 18.04, "text": " As some of these networks have hundreds of layers and often thousands of neurons within"}, {"start": 18.04, "end": 22.16, "text": " these layers, they are almost unfathomably complex."}, {"start": 22.16, "end": 27.32, "text": " At this point, it makes sense to ask, can we understand what is going on inside these"}, {"start": 27.32, "end": 28.32, "text": " networks?"}, {"start": 28.32, "end": 31.52, "text": " Do we even have a fighting chance?"}, {"start": 31.52, "end": 36.76, "text": " Luckily, today, visualizing the inner workings of neural networks is a research subfield"}, {"start": 36.76, "end": 41.76, "text": " of its own, and the answer is, yes, we learn more and more every year."}, {"start": 41.76, "end": 44.32, "text": " But there is also plenty more to learn."}, {"start": 44.32, "end": 48.92, "text": " Earlier, we talked about a technique that we called activation maximization, which was"}, {"start": 48.92, "end": 54.92, "text": " about trying to find an input that makes a given neuron as excited as possible."}, {"start": 54.92, "end": 59.52, "text": " This gives us some cues as to what the neural network is looking for in an image."}, {"start": 59.52, "end": 66.24000000000001, "text": " A later work that proposes visualizing spatial activations gives us more information about"}, {"start": 66.24000000000001, "end": 71.2, "text": " these interactions between two or even more neurons."}, {"start": 71.2, "end": 76.4, "text": " You see here with the dots that it provides us a dense sampling of the most likely activations"}, {"start": 76.4, "end": 81.72, "text": " and this leads to a more complete, bigger picture view of the inner workings of the neural"}, {"start": 81.72, "end": 83.72, "text": " network."}, {"start": 83.72, "end": 87.03999999999999, "text": " This is what it looks like if we run it on one image."}, {"start": 87.03999999999999, "end": 92.32, "text": " It also provides us with way more extra value because so far, we have only seen how the"}, {"start": 92.32, "end": 98.32, "text": " neural network reacts to one image, but this method can be extended to see its reaction"}, {"start": 98.32, "end": 101.88, "text": " to not one, but one million images."}, {"start": 101.88, "end": 104.72, "text": " You can see an example of that here."}, {"start": 104.72, "end": 110.6, "text": " Later, it was also revealed that some of these image detector networks can assemble something"}, {"start": 110.6, "end": 114.32, "text": " that we call a pose invariant dog head detector."}, {"start": 114.32, "end": 120.39999999999999, "text": " What this means is that it can detect a dog head in many different orientations and look."}, {"start": 120.39999999999999, "end": 126.19999999999999, "text": " You see that it gets very excited by all of these good boys plus this squirrel."}, {"start": 126.19999999999999, "end": 131.56, "text": " Today's technique offers us an excellent tool to look into the inner workings of a convolution"}, {"start": 131.56, "end": 136.88, "text": " on your own network that is very capable of image-related operations, for instance image"}, {"start": 136.88, "end": 137.88, 
"text": " classification."}, {"start": 137.88, "end": 144.2, "text": " The task here is that we have an input image of a mug or a red panda and the output should"}, {"start": 144.2, "end": 149.68, "text": " be a decision from the network that yes, what we are seeing is indeed a mug or a panda"}, {"start": 149.68, "end": 152.07999999999998, "text": " or not."}, {"start": 152.07999999999998, "end": 157.88, "text": " They apply something that we call a convolutional filter over an image which tries to find interesting"}, {"start": 157.88, "end": 161.16, "text": " patterns that differentiate objects from each other."}, {"start": 161.16, "end": 165.4, "text": " You can see how the outputs are related to the input image here."}, {"start": 165.4, "end": 170.32, "text": " As you see, the neurons in the next layer will be assembled as a combination of the neurons"}, {"start": 170.32, "end": 172.16, "text": " from the previous layer."}, {"start": 172.16, "end": 177.6, "text": " When we use the term deep learning, we typically refer to neural networks that have two or more"}, {"start": 177.6, "end": 179.48000000000002, "text": " of these inner layers."}, {"start": 179.48000000000002, "end": 184.72, "text": " Each subsequent layer is built by taking all the neurons in the previous layer which select"}, {"start": 184.72, "end": 189.4, "text": " for the features relevant to what the next neuron represents, for instance the handle"}, {"start": 189.4, "end": 193.92000000000002, "text": " of the mug and inhibits everything else."}, {"start": 193.92, "end": 198.95999999999998, "text": " To make this a little clearer, these previous work tried to detect whether we have a car"}, {"start": 198.95999999999998, "end": 201.83999999999997, "text": " in an image by using these neurons."}, {"start": 201.83999999999997, "end": 208.88, "text": " Here, the upper part looks like a car window, the next one resembles a car body and the"}, {"start": 208.88, "end": 213.11999999999998, "text": " bottom of the third neuron clearly contains a wheel detector."}, {"start": 213.11999999999998, "end": 217.83999999999997, "text": " This is the information that the neurons in the next layer are looking for."}, {"start": 217.83999999999997, "end": 223.16, "text": " In the end, we make a final decision as to whether this is a panda or a mug by adding"}, {"start": 223.16, "end": 225.56, "text": " up all the intermediate results."}, {"start": 225.56, "end": 230.68, "text": " The blue or this part is the more relevant this neuron is in the final decision."}, {"start": 230.68, "end": 235.07999999999998, "text": " Here the neural network concludes that this doesn't look like a liveboat or a ladybug"}, {"start": 235.07999999999998, "end": 238.16, "text": " at all, but it looks like pizza."}, {"start": 238.16, "end": 243.72, "text": " If we look at the other sums, we see that the school bus and orange are not hopeless candidates,"}, {"start": 243.72, "end": 249.2, "text": " but still the neural network does not have much doubt whether this is a pizza or not."}, {"start": 249.2, "end": 253.48, "text": " And the best part is that you can even try it yourself in your browser if you click"}, {"start": 253.48, "end": 259.52, "text": " the link in the video description, run these simulations and even upload your own image."}, {"start": 259.52, "end": 263.52, "text": " Make sure that you upload or link something that belongs to one of these classes on the"}, {"start": 263.52, "end": 266.12, "text": " right to make this visualization work."}, 
{"start": 266.12, "end": 270.76, "text": " So clearly, there is plenty more work for us to do to properly understand what is going"}, {"start": 270.76, "end": 276.64, "text": " on under the hood of neural networks, but I hope this quick rundown showcased how many"}, {"start": 276.64, "end": 282.68, "text": " facets there are to this neural network visualization subfield and how exciting it is."}, {"start": 282.68, "end": 286.84, "text": " Make sure to post your experience in the comments section whether the classification worked"}, {"start": 286.84, "end": 288.96, "text": " well for you or not."}, {"start": 288.96, "end": 293.0, "text": " And if you wish to see more videos like this, make sure to subscribe and hit the bell"}, {"start": 293.0, "end": 295.64, "text": " icon to not miss future videos."}, {"start": 295.64, "end": 300.12, "text": " What you see here is an instrumentation for a previous paper that we covered in this"}, {"start": 300.12, "end": 303.4, "text": " series which was made by weights and biases."}, {"start": 303.4, "end": 308.76, "text": " I think organizing these experiments really showcases the usability of their system."}, {"start": 308.76, "end": 313.4, "text": " Weight and biases provides tools to track your experiments in your deep learning projects."}, {"start": 313.4, "end": 318.15999999999997, "text": " Their system is designed to save you a ton of time and money and it is actively used"}, {"start": 318.15999999999997, "end": 324.52, "text": " in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more."}, {"start": 324.52, "end": 329.35999999999996, "text": " And the best part is that if you have an open source, academic or personal project,"}, {"start": 329.35999999999996, "end": 331.4, "text": " you can use their tools for free."}, {"start": 331.4, "end": 333.96, "text": " It really is as good as it gets."}, {"start": 333.96, "end": 339.52, "text": " Make sure to visit them through wnbe.com slash papers or just click the link in the video"}, {"start": 339.52, "end": 343.52, "text": " description to start tracking your experiments in 5 minutes."}, {"start": 343.52, "end": 348.2, "text": " Our thanks to weights and biases for their long-standing support and for helping us make"}, {"start": 348.2, "end": 349.44, "text": " better videos for you."}, {"start": 349.44, "end": 376.88, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=BQQxNa6U6X4
An AI Made All of These Faces! 🕵️‍♀️
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers Their instrumentation of this paper is available here: https://app.wandb.ai/authors/alae/reports/Adversarial-Latent-Autoencoders--VmlldzoxNDA2MDY You can even play with their notebook below! https://colab.research.google.com/drive/1XWlXN7Oi_5UWqUXjX66z4TD859dwo3UN?usp=sharing 📝 The paper "Adversarial Latent Autoencoders" is available here: https://github.com/podgorskiy/ALAE 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nader S., Owen Campbell-Moore, Owen Skarpness, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. More info if you would like to appear here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, I am going to try to tell you the glorious tale of AI-based human face generation and showcase an absolutely unbelievable new paper in this area. Early in this series, we covered a stunning paper that showcased a system that could not only classify an image, but write a proper sentence on what is going on, and could cover even highly non-trivial cases. You may be surprised, but this thing is not recent at all. This is four-year-old news. Insanity. Later, researchers turned this whole problem around and performed something that was previously thought to be impossible. They started using these networks to generate photorealistic images from a written text description. We could create new bird species by specifying that it should have orange legs and a short yellow bill. Then, researchers at NVIDIA recognized and addressed two shortcomings. One was that the images were not that detailed, and two, even though we could input text, we couldn't exert too much artistic control over the results. In came StyleGAN to the rescue, which was then able to perform both of these difficult tasks really well. However, there are some features that are highly localized as we exert control over these images. You can see how this part of the teeth and eyes were pinned to a particular location and the algorithm just refuses to let it go, sometimes to the detriment of its surroundings. A follow-up work titled StyleGAN 2 addresses all of these problems in one go. So, StyleGAN 2 was able to perform near-perfect synthesis of human faces, and remember, none of these people that you see here really exist. Quite remarkable. So, how can we improve this magnificent technique? Well, this new work can do so many things that I don't even know where to start. First, and most important, we now have much more intuitive artistic control over the output images. We can add or remove a beard, make the subject younger or older, change their hairstyle, make their hairline recede, put a smile on their face, or even make their nose pointier. Absolute witchcraft. So, why can we do all this with this new method? The key idea is that it is not using a generative adversarial network, or GAN in short. A GAN means two competing neural networks, where one is trained to generate new images and the other one is used to tell whether the generated images are real or fake. GANs dominated this field for a long while because of their powerful generation capabilities, but on the other hand, they are quite difficult to train and we only have limited control over their output. Among other changes, this work disassembles the generator network into F and G, and the discriminator network into E and D, or in other words, adds an encoder and a decoder network here. Why? The key idea is that the encoder compresses the image data down into a representation that we can edit more easily. This is the land of beards and smiles, or in other words, all of these intuitive features that we can edit exist here, and when we are done, we can decompress the output with the decoder network and produce these beautiful images. This is already incredible. But what else can we do with this new architecture? A lot more. For instance, two: if we add a source and a destination subject, their coarse, middle, or fine styles can also be mixed. What does that mean exactly? The coarse part means that high-level attributes like pose, hairstyle, and face shape will resemble the source subject.
In other words, the child will remain a child and inherit some of the properties of the destination people. However, as we transition to the 'fine from source' part, the effect of the destination subject will be stronger, and the source will only be used to change the color scheme and microstructure of this image. Interestingly, it also changes the background of the subject. Three, it can also perform image interpolation. This means that we have these four images as starting points, and it can compute intermediate images between them. You see here that as we slowly morph into Bill Gates, somewhere along the way, glasses appear. Now note that interpolating between images is not difficult in the slightest and has been possible for a long, long time. All we need to do is just compute average results between these images. So, what makes a good interpolation process? Well, we are talking about good interpolation when each of the intermediate images makes sense and can stand on its own. I think this technique does amazingly well at that. I'll stop the process at different places so you can see for yourself, and let me know in the comments if you agree or not. I also kindly thank the authors for creating more footage just for us to showcase in this series. That is a huge honor. Thank you so much. Note that StyleGAN 2 appeared around December of 2019, and now this paper, by the name Adversarial Latent Autoencoders, appeared only four months later. Four months later. My goodness, this is so much progress in so little time that it truly makes my head spin. What a time to be alive. What you see here is an instrumentation of this exact paper we have talked about, which was made by Weights & Biases. I think organizing these experiments really showcases the usability of their system. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you have an open source, academic, or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com/papers or just click the link in the video description to start tracking your experiments in five minutes. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
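Since the transcript above explains both attribute editing and interpolation as simple arithmetic on the latent code produced by the encoder, here is a small, hedged Python sketch of that idea. The encoder and generator below are just random linear maps standing in for the trained E and G networks, and the "attribute direction" is hypothetical; only the latent-space arithmetic (adding a direction, linearly blending two codes) mirrors what the transcript describes.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, IMAGE_SHAPE = 512, (64, 64, 3)
IMAGE_DIM = int(np.prod(IMAGE_SHAPE))

# Random linear maps standing in for the trained encoder E and generator G;
# in the real system these are the deep networks discussed above.
W_enc = rng.normal(size=(LATENT_DIM, IMAGE_DIM)) / np.sqrt(IMAGE_DIM)
W_dec = rng.normal(size=(IMAGE_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)

def encode(image):                       # stand-in for E
    return W_enc @ image.ravel()

def decode(latent):                      # stand-in for G
    return (W_dec @ latent).reshape(IMAGE_SHAPE)

def edit_attribute(image, direction, strength=1.5):
    """Semantic edit: nudge the latent code along a (hypothetical) learned
    direction such as a 'beard' or 'smile' axis, then decode the result."""
    return decode(encode(image) + strength * direction)

def interpolate(image_a, image_b, steps=8):
    """Latent interpolation: every intermediate code decodes to its own image,
    rather than a naive pixel-space average of the two endpoints."""
    wa, wb = encode(image_a), encode(image_b)
    return [decode((1.0 - t) * wa + t * wb) for t in np.linspace(0.0, 1.0, steps)]

# Usage with placeholder images.
face_a, face_b = rng.random(IMAGE_SHAPE), rng.random(IMAGE_SHAPE)
frames = interpolate(face_a, face_b)
edited = edit_attribute(face_a, rng.normal(size=LATENT_DIM))
```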
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karajol Naifahir."}, {"start": 4.32, "end": 10.48, "text": " Today, I am going to try to tell you the glorious tale of AI-based human-faced generation"}, {"start": 10.48, "end": 15.36, "text": " and showcase an absolutely unbelievable new paper in this area."}, {"start": 15.36, "end": 20.48, "text": " Early in this series, we covered a stunning paper that showcased a system that could not"}, {"start": 20.48, "end": 26.64, "text": " only classify an image, but write a proper sentence on what is going on and could cover"}, {"start": 26.64, "end": 32.72, "text": " even highly non-trivial cases. You may be surprised, but this thing is not recent at all."}, {"start": 32.72, "end": 39.6, "text": " This is four-year-old news. Insanity. Later, researchers turned this whole problem around"}, {"start": 39.6, "end": 44.88, "text": " and performed something that was previously thought to be impossible. They started using"}, {"start": 44.88, "end": 50.16, "text": " these networks to generate photorealistic images from a written text description."}, {"start": 50.16, "end": 56.08, "text": " We could create new bird species by specifying that it should have orange legs and a short yellow"}, {"start": 56.08, "end": 62.64, "text": " bill. Then, researchers at Nvidia recognized and addressed two shortcomings. One was that the"}, {"start": 62.64, "end": 68.4, "text": " images were not that detailed and two, even though we could input text, we couldn't exert"}, {"start": 68.4, "end": 74.48, "text": " too much artistic control over the results. In came Styrgan to the rescue, which was then"}, {"start": 74.48, "end": 80.56, "text": " able to perform both of these difficult tasks really well. Furthermore, there are some features"}, {"start": 80.56, "end": 85.75999999999999, "text": " that are highly localized as we exert control over these images. You can see how this"}, {"start": 85.76, "end": 92.0, "text": " part of the teeth and eyes were pinned to a particular location and the algorithm just refuses"}, {"start": 92.0, "end": 98.56, "text": " to let it go, sometimes to the detriment of its surroundings. A follow-up work titled Styrgan"}, {"start": 98.56, "end": 106.56, "text": " 2 addresses all of these problems in one go. So, Styrgan 2 was able to perform near perfect synthesis"}, {"start": 106.56, "end": 113.84, "text": " of human faces and remember, none of these people that you see here really exist. Quite remarkable."}, {"start": 113.84, "end": 120.64, "text": " So, how can we improve this magnificent technique? Well, this nowhere can do so many things I don't"}, {"start": 120.64, "end": 127.44, "text": " even know where to start. First and most important, we now have much more intuitive artistic control"}, {"start": 127.44, "end": 133.6, "text": " over the output images. We can add or remove a beard, make the subject younger or older,"}, {"start": 134.32, "end": 140.88, "text": " change their hairstyle, make their hairline recede, put a smile on their face, or even make their"}, {"start": 140.88, "end": 147.68, "text": " nose point here. Absolute witchcraft. 
So, why can we do all this with this new method?"}, {"start": 147.68, "end": 153.6, "text": " The key idea is that it is not using a generative adversarial network again in short."}, {"start": 154.16, "end": 160.0, "text": " Again means two competing neural networks where one is trained to generate new images and the"}, {"start": 160.0, "end": 166.72, "text": " other one is used to tell whether the generated images are real or fake. Gans dominated this field"}, {"start": 166.72, "end": 172.72, "text": " for a long while because of their powerful generation capabilities, but on the other hand,"}, {"start": 172.72, "end": 178.4, "text": " they are quite difficult to train and we only have limited control over its output."}, {"start": 178.4, "end": 185.44, "text": " Among other changes, this work disassembles the generator network into F and G and the discriminator"}, {"start": 185.44, "end": 193.36, "text": " network into E and D or in other words adds an encoder and the coder network here. Why?"}, {"start": 193.36, "end": 199.68, "text": " The key idea is that the encoder compresses the image data down into a representation that we"}, {"start": 199.68, "end": 206.8, "text": " can edit more easily. This is the land of beards and smiles or in other words, all of these intuitive"}, {"start": 206.8, "end": 212.56, "text": " features that we can edit exist here and when we are done, we can decompress the output with the"}, {"start": 212.56, "end": 220.16000000000003, "text": " decoder network and produce these beautiful images. This is already incredible. But what else can we do"}, {"start": 220.16, "end": 227.28, "text": " with this new architecture? A lot more. For instance, too, if we add a source and destination subjects,"}, {"start": 227.28, "end": 233.76, "text": " their course, middle or fine styles can also be mixed. What does that mean exactly?"}, {"start": 233.76, "end": 240.07999999999998, "text": " The course part means that high-level attributes like pose, hairstyle and face shape will resemble"}, {"start": 240.07999999999998, "end": 246.64, "text": " the source subject. In other words, the child will remain a child and inherit some of the properties"}, {"start": 246.64, "end": 253.67999999999998, "text": " of the destination people. However, as we transition to the fine from source part, the effect of the"}, {"start": 253.67999999999998, "end": 259.52, "text": " destination subject will be stronger and the source will only be used to change the color scheme"}, {"start": 259.52, "end": 265.76, "text": " and microstructure of this image. Interestingly, it also changes the background of the subject."}, {"start": 267.68, "end": 273.36, "text": " Three, it can also perform image interpolation. This means that we have these four images as"}, {"start": 273.36, "end": 279.6, "text": " starting points and it can compute intermediate images between them. You see here that as we slowly"}, {"start": 279.6, "end": 286.32, "text": " become bill gates, somewhere along the way, glasses appear. Now note that interpolating between"}, {"start": 286.32, "end": 292.32, "text": " images is not difficult in the slightest and has been possible for a long, long time. All we need to"}, {"start": 292.32, "end": 299.12, "text": " do is just compute average results between these images. 
So, what makes a good interpolation process?"}, {"start": 299.12, "end": 305.44, "text": " Well, we are talking about good interpolation when each of the intermediate images make sense"}, {"start": 305.44, "end": 312.0, "text": " and can stand on their own. I think this technique does amazingly well at that. I'll stop the process"}, {"start": 312.0, "end": 317.44, "text": " at different places you can see for yourself and let me know in the comments if you agree or not."}, {"start": 318.0, "end": 324.0, "text": " I also kindly thank the authors for creating more footage just for us to showcase in this series."}, {"start": 324.0, "end": 331.68, "text": " That is a huge honor. Thank you so much. A note that Staggan 2 appeared around December of 2019"}, {"start": 331.68, "end": 338.64, "text": " and now this paper by the name Adversarial Layton Autoencorders appeared only four months later."}, {"start": 339.28, "end": 346.72, "text": " Four months later. My goodness, this is so much progress in so little time it truly makes my"}, {"start": 346.72, "end": 353.44, "text": " head spin. What a time to be alive. But you see here is an instrumentation of this exact paper we"}, {"start": 353.44, "end": 359.52, "text": " have talked about which was made by weights and biases. I think organizing these experiments"}, {"start": 359.52, "end": 365.12, "text": " really showcases the usability of their system. Weight and biases provides tools to track your"}, {"start": 365.12, "end": 370.4, "text": " experiments in your deep learning projects. Their system is designed to save you a ton of time"}, {"start": 370.4, "end": 376.88, "text": " and money and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research,"}, {"start": 376.88, "end": 383.44, "text": " GitHub and more. And the best part is that if you have an open source, academic or personal project,"}, {"start": 383.44, "end": 389.28, "text": " you can use their tools for free. It really is as good as it gets. Make sure to visit them through"}, {"start": 389.28, "end": 395.52, "text": " www.nb.com slash papers or just click the link in the video description to start tracking your"}, {"start": 395.52, "end": 401.28, "text": " experiments in five minutes. Our thanks to weights and biases for their long standing support and"}, {"start": 401.28, "end": 405.92, "text": " for helping us make better videos for you. Thanks for watching and for your generous support"}, {"start": 405.92, "end": 407.92, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=3UZzu4UQLcI
NVIDIA’s AI Recreated PacMan! 👻
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers The mentioned blog post is available here: https://www.wandb.com/articles/visualizing-molecular-structure-with-weights-biases 📝 The paper "Learning to Simulate Dynamic Environments with GameGAN" is available here: https://nv-tlabs.github.io/gameGAN/ Our paper with the neural renderer is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nader S., Owen Campbell-Moore, Owen Skarpness, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. More info if you would like to appear here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #NVIDIA #GameGAN
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Neural network-based learning methods are capable of wondrous things these days. They can do classification, which means that they can look at an image and the output is a decision on whether we see a dog or a cat, or a sentence that describes an image. In the case of DeepMind's AI playing Atari games, the input is the video footage of the game and the output is an action that decides what we do with our character next. In OpenAI's amazing Jukebox paper, the input was the style of someone, a music genre, and lyrics, and the output was a waveform, or in other words, a song we can listen to. But a few hundred episodes ago, we covered a paper from 2015 where scientists at DeepMind asked the question: what if we could get these neural networks to output not sentences, decisions, waveforms, or anything of that sort; what if the output would be a computer program? Can we teach a neural network programming? I was convinced that the answer is no, until I saw these results. So what is happening here? The input is a scratchpad where we are performing multi-digit addition in front of the curious eyes of the neural network. And if it has looked on for long enough, it was indeed able to reproduce a computer program that could eventually perform addition. It could also perform sorting, and would even be able to rotate the images of these cars into a target position. It was called a Neural Programmer-Interpreter, and of course it was slow and a bit inefficient, but no matter, because it could finally make something previously impossible, possible. That is an amazing leap. So why are we talking about this work from 2015? Well, apart from the fact that there are many amazing works that are timeless, and this is one of them, in this series I always say, two more papers down the line and it will be improved significantly. So here is the Two Minute Papers moment of truth. How has this area improved with this follow-up work? Let's have a look at this paper from scientists at NVIDIA that implements a similar concept for computer games. So how is that even possible? Normally, if we wish to write a computer game, we first envision the game in our mind, then we sit down and do the programming. But this new paper does this completely differently. Now, hold on to your papers, because this is a neural network-based method that first looks at someone playing the game, and then it is able to implement the game so that it not only looks like it, but it also behaves the same way to our key presses. You see it at work here. Yes, this means that we can even play with it, and it learns the internal rules of the game and the graphics just by looking at some gameplay. Note that the key part here is that we are not doing any programming by hand; the entirety of the program is written by the AI. We don't need access to the source code or the internal workings of the game; as long as we can just look at it, it can learn the rules. Everything truly behaves as expected; we can even pick up the capsule and eat the ghosts as well. This already sounds like science fiction, and we are not nearly done yet. There are additional goodies. It has memory and uses it consistently. In other words, things don't just happen arbitrarily. If we return to a state of the game that we visited before, it will remember to present us with very similar information.
It also has an understanding of foreground and background, and of dynamic and static objects as well, so we can experiment with replacing these parts, thereby re-skinning our games. It still needs quite a bit of data to perform all this, as it has looked at approximately 120 hours of footage of the game being played. However, now something is possible that was previously impossible. And of course, two more papers down the line, this will be improved significantly, I am sure. I think this work is going to be one of those important milestones that remind us that many of the things that we had handcrafted methods for will, over time, be replaced with these learning algorithms. They already know the physics of fluids, or in other words, they are already capable of looking at videos of these simulations and learning the underlying physical laws, and they can demonstrate having learned general knowledge of the rules by being able to continue these simulations even if we change the scene around quite a bit. In light transport research, we also have decades of progress in simulating how rays of light interact with scenes, and we can create these beautiful images. Parts of these algorithms, for instance noise filtering, are already taken over by AI-based techniques, and I can't help but feel that a bigger tidal wave is coming. This tidal wave will be an entirely AI-driven technique that will write the code for the entirety of the system. Sure, the first ones will be limited; for instance, this is a neural renderer from one of our papers that is limited to this scene and lighting setup. But you know the saying: two more papers down the line and it will be an order of magnitude better. I can't wait to tell you all about it in a video when this happens. Make sure to subscribe and hit the bell icon to not miss any follow-up works. Goodness, I love my job. What a time to be alive! This episode has been supported by Weights & Biases. In this post, they show you how to visualize molecular structures using their system. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you have an open source, academic, or personal project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com/papers or just click the link in the video description to start tracking your experiments in 5 minutes. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
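To make the idea of a learned game emulator described above a bit more concrete, here is a small, hedged PyTorch sketch: a toy network that consumes the current frame, the pressed key, and a memory vector, and predicts the next frame while updating that memory, trained against recorded gameplay. Every shape, layer size, and the plain reconstruction loss are assumptions for illustration only; the actual GameGAN model is a far larger architecture trained with adversarial losses as well.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGameSimulator(nn.Module):
    """Toy game emulator: takes the current frame, the pressed key, and a
    memory vector; predicts the next frame and an updated memory."""
    def __init__(self, frame_dim=32 * 32, n_actions=5, mem_dim=64):
        super().__init__()
        self.core = nn.GRUCell(frame_dim + n_actions, mem_dim)
        self.render = nn.Linear(mem_dim, frame_dim)

    def forward(self, frame, action_onehot, memory):
        memory = self.core(torch.cat([frame, action_onehot], dim=-1), memory)
        return torch.sigmoid(self.render(memory)), memory

model = TinyGameSimulator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
memory = torch.zeros(1, 64)

# One training step on a recorded (frame, action, next_frame) triple; real
# training would loop over many hours of such gameplay footage.
frame = torch.rand(1, 32 * 32)
action = F.one_hot(torch.tensor([2]), num_classes=5).float()
target_next = torch.rand(1, 32 * 32)

pred_next, memory = model(frame, action, memory)
loss = F.mse_loss(pred_next, target_next)
opt.zero_grad()
loss.backward()
opt.step()
```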
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karojona Ifahir."}, {"start": 4.6000000000000005, "end": 9.28, "text": " Neural network-based learning methods are capable of wondrous things these days."}, {"start": 9.28, "end": 14.24, "text": " They can do classification, which means that they can look at an image and the output is"}, {"start": 14.24, "end": 20.96, "text": " a decision whether we see a dog or a cat or a sentence that describes an image."}, {"start": 20.96, "end": 26.2, "text": " In the case of DeepMind's AI playing Atari games, the input is the video footage of the"}, {"start": 26.2, "end": 32.48, "text": " game and the output is an action that decides what we do with our character next."}, {"start": 32.48, "end": 39.56, "text": " In OpenAI's amazing drug box paper, the input was a style of someone, a music genre, and"}, {"start": 39.56, "end": 47.28, "text": " lyrics and the output was a waveform or in other words a song we can listen to."}, {"start": 47.28, "end": 53.56, "text": " But a few hundred episodes ago we covered a paper from 2015 where scientists at DeepMind"}, {"start": 53.56, "end": 59.440000000000005, "text": " asked the question, what if we would get these neural networks to output not sentences,"}, {"start": 59.440000000000005, "end": 66.88, "text": " decisions, waveforms or any of that sort, what if the output would be a computer program?"}, {"start": 66.88, "end": 69.84, "text": " Can we teach a neural network programming?"}, {"start": 69.84, "end": 75.28, "text": " I was convinced that the answer is no until I saw these results."}, {"start": 75.28, "end": 77.4, "text": " So what is happening here?"}, {"start": 77.4, "end": 82.24000000000001, "text": " The input is a scratch pad where we are performing multi-digit addition in front of the"}, {"start": 82.24, "end": 84.83999999999999, "text": " curious size of the neural network."}, {"start": 84.83999999999999, "end": 90.08, "text": " And if it has looked for long enough, it was indeed able to reproduce a computer program"}, {"start": 90.08, "end": 93.8, "text": " that could eventually perform addition."}, {"start": 93.8, "end": 99.19999999999999, "text": " It could also perform sorting and would even be able to rotate the images of these cars"}, {"start": 99.19999999999999, "end": 101.03999999999999, "text": " into a target position."}, {"start": 101.03999999999999, "end": 107.03999999999999, "text": " It was called a neural programmer, interpreter, and of course it was slow and a bit inefficient,"}, {"start": 107.04, "end": 113.04, "text": " but no matter because it could finally make something previously impossible, possible."}, {"start": 113.04, "end": 115.32000000000001, "text": " That is an amazing leap."}, {"start": 115.32000000000001, "end": 118.52000000000001, "text": " So why are we talking about this work from 2015?"}, {"start": 118.52000000000001, "end": 123.80000000000001, "text": " Well, apart from the fact that there are many amazing works that are timeless and this"}, {"start": 123.80000000000001, "end": 128.88, "text": " is one of them, in this series I always say two more papers down the line and it will"}, {"start": 128.88, "end": 131.0, "text": " be improved significantly."}, {"start": 131.0, "end": 134.4, "text": " So here is the two minute papers moment of truth."}, {"start": 134.4, "end": 137.84, "text": " How has this area improved with this follow-up work?"}, {"start": 137.84, "end": 142.92000000000002, "text": " Let's 
have a look at this paper from scientists at Nvidia that implements a similar concept"}, {"start": 142.92000000000002, "end": 145.56, "text": " for computer games."}, {"start": 145.56, "end": 148.24, "text": " So how is that even possible?"}, {"start": 148.24, "end": 153.76, "text": " Normally, if we wish to write a computer game, we first envision the game in our mind,"}, {"start": 153.76, "end": 156.48000000000002, "text": " then we sit down and do the programming."}, {"start": 156.48000000000002, "end": 159.4, "text": " But this new paper does this completely differently."}, {"start": 159.4, "end": 164.64000000000001, "text": " Now, hold on to your papers because this is a neural network based method that first"}, {"start": 164.64000000000001, "end": 170.04000000000002, "text": " looks at someone playing the game and then it is able to implement the game so that it"}, {"start": 170.04000000000002, "end": 176.24, "text": " not only looks like it, but it also behaves the same way to our key presses."}, {"start": 176.24, "end": 177.88, "text": " You see it at work here."}, {"start": 177.88, "end": 182.88, "text": " Yes, this means that we can even play with it and it learns the internal rules of the"}, {"start": 182.88, "end": 187.28, "text": " game and the graphics just by looking at some gameplay."}, {"start": 187.28, "end": 192.64000000000001, "text": " Note that the key part here is that we are not doing any programming by hand, the entirety"}, {"start": 192.64000000000001, "end": 195.44, "text": " of the program is written by the AI."}, {"start": 195.44, "end": 199.84, "text": " We don't need access to the source code or the internal workings of the game as long"}, {"start": 199.84, "end": 203.28, "text": " as we can just look at it, it can learn the rules."}, {"start": 203.28, "end": 208.52, "text": " Everything truly behaves as expected, we can even pick up the capsule and eat the ghosts"}, {"start": 208.52, "end": 209.84, "text": " as well."}, {"start": 209.84, "end": 214.56, "text": " This already sounds like science fiction and we are not nearly done yet."}, {"start": 214.56, "end": 216.32, "text": " There are additional goodies."}, {"start": 216.32, "end": 219.35999999999999, "text": " It has memory and uses it consistently."}, {"start": 219.35999999999999, "end": 223.04, "text": " In other words, things don't just happen arbitrarily."}, {"start": 223.04, "end": 227.88, "text": " If we return to a state of the game that we visited before, it will remember to present"}, {"start": 227.88, "end": 230.6, "text": " us with very similar information."}, {"start": 230.6, "end": 236.0, "text": " It also has an understanding of foreground and background, dynamic and static objects"}, {"start": 236.0, "end": 242.95999999999998, "text": " as well, so we can experiment with replacing these parts thereby re-skining our games."}, {"start": 242.96, "end": 247.8, "text": " It still needs quite a bit of data to perform all this as it has looked at approximately"}, {"start": 247.8, "end": 251.76000000000002, "text": " 120 hours of footage of the game being played."}, {"start": 251.76000000000002, "end": 257.0, "text": " However, now something is possible that was previously impossible."}, {"start": 257.0, "end": 261.52, "text": " And of course, two more papers down the line, this will be improved significantly, I am"}, {"start": 261.52, "end": 262.52, "text": " sure."}, {"start": 262.52, "end": 267.32, "text": " I think this work is going to be one of those important milestones that remind us 
that"}, {"start": 267.32, "end": 273.0, "text": " many of the things that we had handcrafted methods for will, over time, be replaced with"}, {"start": 273.0, "end": 275.0, "text": " these learning algorithms."}, {"start": 275.0, "end": 280.15999999999997, "text": " They already know the physics of fluids, or in other words, they are already capable"}, {"start": 280.15999999999997, "end": 285.68, "text": " of looking at videos of these simulations and learn the underlying physical laws, and"}, {"start": 285.68, "end": 290.68, "text": " they can demonstrate having learned general knowledge of the rules by being able to continue"}, {"start": 290.68, "end": 295.6, "text": " these simulations even if we change the scene around quite a bit."}, {"start": 295.6, "end": 301.04, "text": " In light transport research, we also have decades of progress in simulating how rays of light"}, {"start": 301.04, "end": 305.28000000000003, "text": " interact with scenes and we can create these beautiful images."}, {"start": 305.28000000000003, "end": 310.44, "text": " Parts of these algorithms, for instance, noise filtering, are already taken over by AI-based"}, {"start": 310.44, "end": 315.76000000000005, "text": " techniques and I can't help but feel that a bigger tidal wave is coming."}, {"start": 315.76000000000005, "end": 321.04, "text": " This tidal wave will be an entirely AI driven technique that will write the code for the"}, {"start": 321.04, "end": 322.88, "text": " entirety of the system."}, {"start": 322.88, "end": 328.08, "text": " Sure, the first ones will be limited, for instance, this is a newer renderer from one of our"}, {"start": 328.08, "end": 333.32, "text": " papers that is limited to this scene and lighting setup, but you know the saying, two more papers"}, {"start": 333.32, "end": 336.88, "text": " down the line and it will be an order of magnitude better."}, {"start": 336.88, "end": 341.4, "text": " I can't wait to tell all about it to you with a video when this happens."}, {"start": 341.4, "end": 345.96, "text": " Make sure to subscribe and hit the bell icon to not miss any follow-up works."}, {"start": 345.96, "end": 348.92, "text": " Goodness, I love my job."}, {"start": 348.92, "end": 350.76, "text": " What a time to be alive!"}, {"start": 350.76, "end": 354.08, "text": " This episode has been supported by weights and biases."}, {"start": 354.08, "end": 359.56, "text": " In this post, they show you how to visualize molecular structures using their system."}, {"start": 359.56, "end": 364.2, "text": " Weight and biases provides tools to track your experiments in your deep learning projects."}, {"start": 364.2, "end": 368.96, "text": " Their system is designed to save you a ton of time and money and it is actively used"}, {"start": 368.96, "end": 375.28, "text": " in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more."}, {"start": 375.28, "end": 380.15999999999997, "text": " And the best part is that if you have an open source, academic or personal project, you"}, {"start": 380.16, "end": 382.20000000000005, "text": " can use their tools for free."}, {"start": 382.20000000000005, "end": 384.76000000000005, "text": " It really is as good as it gets."}, {"start": 384.76000000000005, "end": 391.16, "text": " Make sure to visit them through www.com-slash-papers or just click the link in the video description"}, {"start": 391.16, "end": 394.32000000000005, "text": " to start tracking your experiments in 5 minutes."}, {"start": 394.32000000000005, "end": 399.04, 
"text": " Our thanks to weights and biases for their long-standing support and for helping us make"}, {"start": 399.04, "end": 400.40000000000003, "text": " better videos for you."}, {"start": 400.4, "end": 428.03999999999996, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=oYtwCZx5rsU
Surprise Video With Our New Paper On Material Editing! 🔮
📝 Our "Photorealistic Material Editing Through Direct Image Manipulation" paper and its source code are available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/photorealistic-material-editing/ The previous paper with the microplanet scene is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Gordon Child, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nader S., Owen Campbell-Moore, Owen Skarpness, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh More info if you would like to appear here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #NeuralRendering
Dear Fellow Scholars, this is not Two Minute Papers with Dr. Károly Zsolnai-Fehér; due to popular demand, this is a surprise video with the talk of our new paper that we just published. This was the third and last paper in my PhD thesis, and hence this is going to be a one-off video that is longer and a tiny bit more technical. I am keenly aware of it, but I hope you'll enjoy it. Let me know in the comments when you have finished the video, and worry not, all the upcoming videos are going to be in the usual Two Minute Papers format. The paper and the source code are all available in the video description. And now, let's dive in. In a previous paper, our goal was to populate this scene with over a hundred materials with a learning-based technique and create a beautiful planet with rich vegetation. The results looked like this. One of the key elements to accomplish this was to use a neural renderer, or in other words, the decoder network that you see here, which took a material shader description as an input and predicted its appearance, thereby replacing the renderer we used in the project. It had its own limitations; for instance, it was limited to this scene with a fixed lighting setup, and only the material properties were subject to change. But in return, it mimicked the global illumination renderer rapidly and faithfully. And in this new work, our goal was to take a different vantage point and help artists with general image processing knowledge to perform material synthesis. Now, this sounds a little nebulous, so let me explain. One of the key ideas is to achieve this with a system that is meant to take images from its own renderer, like the ones you see here. But of course, we produce these ourselves, so obviously we know how to do it, so this is not very useful yet. However, the twist is that we only start out with an image of this source material, then load it into a raster image editing program like Photoshop, edit it to our liking, and just pretend that this is achievable with our renderer. As you see, many of these target images in the middle are results of poorly executed edits. For instance, the stitched specular highlight in the first example isn't very well done, and neither is the background of the gold target image in the middle. However, in the next step, our method proceeds to find a photorealistic material description that, when rendered, resembles this target image, and it works well even in the presence of these poorly executed edits. The whole process executes in 20 seconds. To produce a mathematical formulation for this problem, we started with this. We have an input image t and edit it to our liking to get the target image t with a tilde. Now, we are looking for a shader parameter set x that, when rendered with the phi operator, approximates the edited image. The constraint below stipulates that we remain within the physical boundaries for each parameter, for instance, albedos between 0 and 1, proper indices of refraction, and so on. So how do we deal with phi? We used the previously mentioned neural renderer to implement it; otherwise, this optimization process would take 25 hours. Later, we made an equivalent, unconstrained reformulation of this problem to be able to accommodate a greater set of optimizers. This all sounds great on paper, and it works reasonably well for materials that can be exactly matched with this shader, like this one. This optimizer-based solution can achieve it reasonably well.
But unfortunately, for more challenging cases, as you see with the target image on the lower right, the optimizer's output leaves much to be desired. Note again that the result on the upper right is achievable with the shader, while the lower right is a challenging imaginary material that we are trying to achieve. The fact that this is quite difficult is not a surprise, because we have a non-linear and non-convex optimization problem, which is also high-dimensional. So this optimization solution is also quite slow, but it can start inching towards the target image. As an alternative solution, we also developed something that we call an inversion network. This addresses the adjoint problem of neural rendering, or in other words, we show it the edited input image, and out comes the shader that would produce this image. We have trained 9 different neural network architectures for this problem, which sounds great, so how well did it work? Well, we found out that none of them are really satisfactory for more difficult edits, because all of the target images are far, far outside of the training domain. We just cannot prepare the networks to be able to handle the rich variety of edits that come from the artist. However, some of them are, one could say, almost usable; for instance, numbers one and five are not complete failures, and note that these solutions are provided instantly. So we have two techniques, neither of which is perfect for our task: a fast and approximate solution with the inversion networks, and a slower optimizer that can slowly inch towards the target image. Our key insight here is that we can produce a hybrid method that fuses the two solutions together. The workflow goes as follows. We take an image of the initial source material and edit it to our liking to get this target image. Then, we create a coarse prediction with a selection of inversion networks to initialize the optimizer with the prediction of one of these neural networks, preferably a good one, so the optimizer can start out from a reasonable initial guess. So how well does this hybrid method work? I'll show you in a moment. Here, we start out with an achievable target image and then try two challenging image editing operations. This image can be reproduced perfectly as long as the inversion process works reliably. Unfortunately, as you see here, this is not the case. In the first row, using the optimizer and the inversion networks separately, we get results that fail to capture the specular highlight properly. In the second row, we have deleted the specular highlight on the target image on the right and replaced it with a completely different one. I like to call this the Franken-BRDF, and it would be amazing if we could do this, but unfortunately, both the optimizer and the inversion networks flounder. Another thing that would be really nice to do is deleting the specular highlight and filling the image via image inpainting. This kind of works with the optimizer, but you'll see in a moment that it's not nearly as good as it could be. And now, if you look carefully, you see that our hybrid method outperforms both of these techniques in each of the three cases. In the paper, we report results on a dozen more cases as well. We make an even stronger claim in the paper, where we say that these results are close to the global optimum. You see the results of this hybrid method in this table; they are highlighted with the red ellipses. The records in the table show the RMS errors and are subject to minimization.
With this, you see that it goes neck and neck with the global optimizer, which is highlighted with green. In summary, our technique runs in approximately 20 seconds and works for specular highlight editing, image blending, stitching, inpainting, and more. The proposed method is robust, works even in the presence of poorly edited images, can be easily deployed in already existing rendering systems, and allows for rapid material prototyping for artists working in the industry. It is also independent of the underlying principled shader, so you can also add your own and expect it to work well, as long as the neural renderer works reliably. A key limitation of the work is that it only takes images in this canonical scene with a carved sphere material sample, but we conjecture that it can be extended to be more general and propose a way to do it in the paper. Make sure to have a closer look if you are interested. The teaser image of this paper is showcased on the 2020 Computer Graphics Forum cover page. The whole thing is also quite simple to implement, and we provide the source code and pre-trained networks on our website, all under a permissive license. Thank you so much for watching this, and a big thanks to Peter Wonka and Michael Wimmer for advising this work.
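The talk's formulation boils down to finding shader parameters x that minimize the image-space difference between the neural renderer's output phi(x) and the edited target t-tilde, subject to physical bounds on each parameter, with an inversion network supplying the initial guess. The sketch below mirrors that hybrid recipe using SciPy's bounded L-BFGS-B optimizer; the `neural_render` and `inversion_net` functions are toy stand-ins (a random linear map and its pseudo-inverse), not the trained networks from the paper, and the single [0, 1] bound per parameter is a simplifying assumption.

```python
import numpy as np
from scipy.optimize import minimize

N_PARAMS = 8
rng = np.random.default_rng(1)
A = rng.normal(size=(16, N_PARAMS))

def neural_render(x):
    """Toy stand-in for the phi operator (shader parameters -> image)."""
    return np.tanh(A @ x)

def inversion_net(image):
    """Toy stand-in for an inversion network (image -> shader parameter guess)."""
    pseudo = np.linalg.pinv(A) @ np.arctanh(np.clip(image, -0.99, 0.99))
    return np.clip(pseudo, 0.0, 1.0)

def match_material(target_image):
    """Hybrid recipe: a coarse neural prediction gives the initial guess, then a
    bounded optimizer minimizes ||phi(x) - t_tilde||^2 while keeping every
    parameter inside its physical range (e.g. albedo in [0, 1])."""
    x0 = inversion_net(target_image)                       # coarse initialization
    objective = lambda x: np.sum((neural_render(x) - target_image) ** 2)
    bounds = [(0.0, 1.0)] * N_PARAMS
    return minimize(objective, x0, method="L-BFGS-B", bounds=bounds).x

# Usage: pretend this is the edited target image t_tilde.
target = neural_render(rng.random(N_PARAMS))
recovered_params = match_material(target)
```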
[{"start": 0.0, "end": 5.92, "text": " Dear Fellow Scholars, this is not two-minute papers with Dr. Karo Zonai Fahir, due to popular"}, {"start": 5.92, "end": 11.56, "text": " demand this is a surprise video with the talk of our new paper that we just published."}, {"start": 11.56, "end": 16.92, "text": " This was the third and last paper in my PhD thesis and hence this is going to be a one-off"}, {"start": 16.92, "end": 23.12, "text": " video that is longer and a tiny bit more technical, I am keenly aware of it but I hope"}, {"start": 23.12, "end": 24.36, "text": " you'll enjoy it."}, {"start": 24.36, "end": 28.64, "text": " Let me know in the comments when you have finished the video and worry not, all the upcoming"}, {"start": 28.64, "end": 32.84, "text": " videos are going to be in the usual two-minute papers format."}, {"start": 32.84, "end": 37.120000000000005, "text": " The paper and the source code are all available in the video description."}, {"start": 37.120000000000005, "end": 38.92, "text": " And now let's dive in."}, {"start": 38.92, "end": 43.84, "text": " In a previous paper our goal was to populate this scene with over a hundred materials"}, {"start": 43.84, "end": 49.120000000000005, "text": " with a learning-based technique and create a beautiful planet with rich vegetation."}, {"start": 49.120000000000005, "end": 51.519999999999996, "text": " The results looked like this."}, {"start": 51.519999999999996, "end": 56.92, "text": " One of the key elements to accomplish this was to use a neural renderer or in other words"}, {"start": 56.92, "end": 62.36, "text": " the decoder network that you see here which took a material shader description as an input"}, {"start": 62.36, "end": 68.28, "text": " and predicted its appearance thereby replacing the renderer we used in the project."}, {"start": 68.28, "end": 72.96000000000001, "text": " It had its own limitations for instance it was limited to this scene with a fixed lighting"}, {"start": 72.96000000000001, "end": 77.56, "text": " setup and only the material properties were subject to change."}, {"start": 77.56, "end": 83.48, "text": " But in return it mimicked the global illumination renderer rapidly and faithfully."}, {"start": 83.48, "end": 88.28, "text": " And in this new work our goal was to take a different vantage point and help artists"}, {"start": 88.28, "end": 92.60000000000001, "text": " with general image processing knowledge to perform material synthesis."}, {"start": 92.60000000000001, "end": 96.88000000000001, "text": " Now this sounds a little nebulous so let me explain."}, {"start": 96.88000000000001, "end": 101.92, "text": " One of the key ideas is to achieve this with a system that is meant to take images from"}, {"start": 101.92, "end": 105.36, "text": " its own renderer like the ones you see here."}, {"start": 105.36, "end": 111.04, "text": " But of course we produce these ourselves so obviously we know how to do it so this is"}, {"start": 111.04, "end": 112.60000000000001, "text": " not very useful yet."}, {"start": 112.6, "end": 118.75999999999999, "text": " However the twist is that we only start out with an image of this source material and then"}, {"start": 118.75999999999999, "end": 124.47999999999999, "text": " load it into a RESTOR image editing program like Photoshop and edit it to our liking and"}, {"start": 124.47999999999999, "end": 128.84, "text": " just pretend that this is achievable with our renderer."}, {"start": 128.84, "end": 133.64, "text": " As you see many of these target images in the 
middle are results of poorly executed"}, {"start": 133.64, "end": 134.88, "text": " edits."}, {"start": 134.88, "end": 139.88, "text": " For instance the stitched specular highlight in the first example isn't very well done"}, {"start": 139.88, "end": 144.12, "text": " and neither is the background of the gold target image in the middle."}, {"start": 144.12, "end": 150.07999999999998, "text": " However in the next step our method proceeds to find a photorealistic material description"}, {"start": 150.07999999999998, "end": 155.79999999999998, "text": " that when rendered resembles this target image and works well even in the presence of"}, {"start": 155.79999999999998, "end": 157.92, "text": " these poorly executed edits."}, {"start": 157.92, "end": 161.16, "text": " The whole process executes in 20 seconds."}, {"start": 161.16, "end": 165.88, "text": " To produce a mathematical formulation for this problem we started with this."}, {"start": 165.88, "end": 172.35999999999999, "text": " We have an input image t and edited to our liking to get the target image t with a tilt."}, {"start": 172.35999999999999, "end": 178.2, "text": " Now we are looking for a shader parameter set x that when rendered with the phi operator"}, {"start": 178.2, "end": 180.56, "text": " approximates the edited image."}, {"start": 180.56, "end": 184.96, "text": " The constraint below stipulates that we would remain within the physical boundaries for"}, {"start": 184.96, "end": 190.92, "text": " each parameter for instance albedos between 0 and 1 proper indices of refraction and"}, {"start": 190.92, "end": 192.12, "text": " so on."}, {"start": 192.12, "end": 194.72, "text": " So how do we deal with phi?"}, {"start": 194.72, "end": 199.76, "text": " We used the previously mentioned neural renderer to implement it otherwise this optimization"}, {"start": 199.76, "end": 202.56, "text": " process would take 25 hours."}, {"start": 202.56, "end": 207.28, "text": " Later we made an equivalent and constrained reformulation of this problem to be able"}, {"start": 207.28, "end": 210.56, "text": " to accommodate a greater set of optimizers."}, {"start": 210.56, "end": 216.2, "text": " This all sounds great on paper and works reasonably well for materials that can be exactly"}, {"start": 216.2, "end": 218.96, "text": " matched with this shader like this one."}, {"start": 218.96, "end": 224.0, "text": " This optimizer based solution can achieve it reasonably well."}, {"start": 224.0, "end": 229.08, "text": " But unfortunately for more challenging cases as you see the target image on the lower"}, {"start": 229.08, "end": 234.12, "text": " right the optimizer's output leaves much to be desired."}, {"start": 234.12, "end": 238.96, "text": " Note again that the result on the upper right is achievable with the shader while the lower"}, {"start": 238.96, "end": 244.08, "text": " right is a challenging imaginary material that we are trying to achieve."}, {"start": 244.08, "end": 248.88, "text": " The fact that this is quite difficult is not a surprise because we have an only near"}, {"start": 248.88, "end": 253.56, "text": " and non-convex optimization problem which is also high dimensional."}, {"start": 253.56, "end": 259.16, "text": " So this optimization solution is also quite slow but it can start inching towards the target"}, {"start": 259.16, "end": 260.48, "text": " image."}, {"start": 260.48, "end": 265.36, "text": " As an alternative solution we also developed something that we call an inversion network"}, {"start": 265.36, 
"end": 270.84000000000003, "text": " this addresses the adjoint problem of neural rendering or in other words we show it the"}, {"start": 270.84000000000003, "end": 275.8, "text": " edited input image and outcomes the shader that would produce this image."}, {"start": 275.8, "end": 280.76, "text": " We have trained 9 different neural network architectures for this problem which sounds great"}, {"start": 280.76, "end": 283.88, "text": " so how well did it work?"}, {"start": 283.88, "end": 289.12, "text": " Well we found out that none of them are really satisfactory for more difficult edits because"}, {"start": 289.12, "end": 293.52, "text": " all of the target images are far far outside of the training domain."}, {"start": 293.52, "end": 298.12, "text": " We just cannot prepare the networks to be able to handle the rich variety of edits that"}, {"start": 298.12, "end": 299.76, "text": " come from the artist."}, {"start": 299.76, "end": 306.15999999999997, "text": " However some of them are one could say almost usable for instance number one and five"}, {"start": 306.16, "end": 311.6, "text": " are not complete failures and note that these solutions are provided instantly."}, {"start": 311.6, "end": 317.52000000000004, "text": " So we have two techniques none of them are perfect for our task a fast and approximate solution"}, {"start": 317.52000000000004, "end": 322.84000000000003, "text": " with the inversion networks and a slower optimizer that can slowly inch towards the target"}, {"start": 322.84000000000003, "end": 323.84000000000003, "text": " image."}, {"start": 323.84000000000003, "end": 328.64000000000004, "text": " Our key insight here is that we can produce a hybrid method that fuses the two solutions"}, {"start": 328.64000000000004, "end": 329.64000000000004, "text": " together."}, {"start": 329.64000000000004, "end": 331.64000000000004, "text": " The workflow goes as follows."}, {"start": 331.64, "end": 336.91999999999996, "text": " We take an image of the initial source material and edit it to our liking to get this target"}, {"start": 336.91999999999996, "end": 337.91999999999996, "text": " image."}, {"start": 337.91999999999996, "end": 342.64, "text": " Then we create a course prediction with a selection of inversion networks to initialize"}, {"start": 342.64, "end": 347.47999999999996, "text": " the optimizer with the prediction of one of these neural networks preferably a good"}, {"start": 347.47999999999996, "end": 352.12, "text": " one so the optimizer can start out from a reasonable initial guess."}, {"start": 352.12, "end": 354.91999999999996, "text": " So how well does this hybrid method work?"}, {"start": 354.91999999999996, "end": 361.03999999999996, "text": " I'll show you in a moment here we start out with an achievable target image and then try"}, {"start": 361.04, "end": 364.04, "text": " two challenging image editing operations."}, {"start": 364.04, "end": 369.92, "text": " This image can be reproduced perfectly as long as the inversion process works reliably."}, {"start": 369.92, "end": 373.44, "text": " Unfortunately as you see here this is not the case."}, {"start": 373.44, "end": 378.88, "text": " In the first row using the optimizer and the inversion networks separately we get results"}, {"start": 378.88, "end": 383.40000000000003, "text": " that fail to capture the specular highlight properly."}, {"start": 383.40000000000003, "end": 388.0, "text": " In the second row we have deleted the specular highlight on the target image on the right"}, {"start": 
388.0, "end": 390.84000000000003, "text": " and replaced it with a completely different one."}, {"start": 390.84, "end": 397.35999999999996, "text": " I like to call this the Franken BRDF and it would be amazing if we could do this but unfortunately"}, {"start": 397.35999999999996, "end": 402.64, "text": " both the optimizer and the inversion networks flounder."}, {"start": 402.64, "end": 406.76, "text": " Another thing that would be really nice to do is deleting the specular highlight and"}, {"start": 406.76, "end": 409.71999999999997, "text": " filling the image via image in painting."}, {"start": 409.71999999999997, "end": 414.15999999999997, "text": " This kind of works with the optimizer but you'll see in a moment that it's not nearly"}, {"start": 414.15999999999997, "end": 416.91999999999996, "text": " as good as it could be."}, {"start": 416.92, "end": 422.0, "text": " And now if you look carefully you see that our hybrid method outperforms both of these"}, {"start": 422.0, "end": 425.04, "text": " techniques in each of the three cases."}, {"start": 425.04, "end": 430.68, "text": " In the paper we report results on a dozen more cases as well."}, {"start": 430.68, "end": 435.48, "text": " We make an even stronger claim in the paper where we say that these results are close to"}, {"start": 435.48, "end": 437.08000000000004, "text": " the global optimum."}, {"start": 437.08000000000004, "end": 441.12, "text": " You see the results of this hybrid method if you look at the intersection of now the"}, {"start": 441.12, "end": 445.56, "text": " meet and and then they are highlighted with the red ellipses."}, {"start": 445.56, "end": 450.84, "text": " The records in the table show the RMS errors and are subject to minimization."}, {"start": 450.84, "end": 455.84, "text": " With this you see that this goes neck and neck with the global optimizer which is highlighted"}, {"start": 455.84, "end": 457.24, "text": " with green."}, {"start": 457.24, "end": 463.48, "text": " In summary our technique runs in approximately 20 seconds works for specular highlight editing,"}, {"start": 463.48, "end": 467.32, "text": " image blending, stitching, in painting and more."}, {"start": 467.32, "end": 472.44, "text": " The proposed method is robust works even in the presence of poorly edited images and"}, {"start": 472.44, "end": 477.64, "text": " can be easily deployed in already existing rendering systems and allows for rapid material"}, {"start": 477.64, "end": 480.84, "text": " prototyping for artists working in the industry."}, {"start": 480.84, "end": 486.24, "text": " It is also independent of the underlying principle shader so you can also add your own and expect"}, {"start": 486.24, "end": 490.68, "text": " it to work well as long as the neural renderer works reliably."}, {"start": 490.68, "end": 495.44, "text": " A key limitation of the work is that it only takes images in this canonical scene with"}, {"start": 495.44, "end": 501.15999999999997, "text": " a carved sphere material sample but we can juxture that it can be extended to be more general"}, {"start": 501.16, "end": 503.64000000000004, "text": " and propose a way to do it in the paper."}, {"start": 503.64000000000004, "end": 506.48, "text": " Make sure to have a closer look if you are interested."}, {"start": 506.48, "end": 512.6, "text": " The teaser image of this paper is showcased in the 2020 computer graphics forum cover page."}, {"start": 512.6, "end": 517.48, "text": " The whole thing is also quite simple to implement and we provide the 
source code, pre-trained"}, {"start": 517.48, "end": 522.28, "text": " networks on our website and all of them are under a permissive license."}, {"start": 522.28, "end": 526.64, "text": " Thank you so much for watching this and the big thanks to Peter Wonka and Michael Wimmer"}, {"start": 526.64, "end": 533.64, "text": " for advising this work."}]
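A minimal Python sketch of the optimization described in the transcript segments above: find shader parameters x, within physically valid bounds, so that a renderer phi(x) matches the edited target image t-tilde, and initialize the search from an inversion network's coarse guess, which is the hybrid method's key idea. Both neural_renderer and inversion_network below are random placeholders with a made-up 10-parameter shader; they only stand in for the paper's learned neural renderer and trained inversion networks.

import numpy as np
from scipy.optimize import minimize

def neural_renderer(x):
    # Stand-in for the phi operator: shader parameters -> rendered image (flattened).
    # In the paper this is a learned neural renderer approximating the path tracer.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((64 * 64, x.size)) * 0.01
    return np.tanh(W @ x)

def inversion_network(t_tilde):
    # Stand-in for the inversion network: edited image -> coarse shader parameters.
    return t_tilde[:10]  # hypothetical 10-parameter shader

def fit_material(t_tilde, bounds):
    lo, hi = np.array(bounds).T
    x0 = np.clip(inversion_network(t_tilde), lo, hi)  # coarse guess, projected into bounds
    def loss(x):
        # Squared difference between the rendered material and the edited target image.
        return float(np.mean((neural_renderer(x) - t_tilde) ** 2))
    result = minimize(loss, x0, method="L-BFGS-B", bounds=bounds)
    return result.x

# Albedo-like parameters must stay in [0, 1]; an index of refraction stays in [1, 3].
bounds = [(0.0, 1.0)] * 9 + [(1.0, 3.0)]
target = np.tanh(np.random.default_rng(1).standard_normal(64 * 64))
x_star = fit_material(target, bounds)

Starting the bounded search from the network's prediction rather than from scratch is the whole point of the hybrid scheme: the inversion network supplies a reasonable initial guess almost instantly, and the optimizer only has to refine it.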
Two Minute Papers
https://www.youtube.com/watch?v=2Bw5f4vYL98
How Well Can DeepMind's AI Learn Physics? ⚛
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Learning to Simulate Complex Physics with Graph Networks" is available here: https://arxiv.org/abs/2002.09405 https://sites.google.com/view/learning-to-simulate/home#h.p_hjnaJ6k8y0wo 🌊 The thesis on fluids is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/fluid_control_msc_thesis/  ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nader S., Owen Campbell-Moore, Owen Skarpness, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. If you have been watching this series for a while, you know very well that I love learning algorithms and fluid simulations. But do you know what I like even better? Learning algorithms applied to fluid simulations, so I couldn't be happier with today's paper. We can create wondrous fluid simulations like the ones you see here by studying the laws of fluid motion from physics and writing a computer program that contains these laws. As you see, the amount of detail we can simulate with these programs is nothing short of amazing. However, I just mentioned neural networks. If we can write a simulator that runs the laws of physics to create these programs, why would we need learning-based algorithms? The answer is in this paper that we discussed about 300 episodes ago. The goal was to show a neural network video footage of lots and lots of fluid and smoke simulations and have it learn how the dynamics work, to the point that it can continue and guess how the behavior of a smoke plume would change over time. We stop the video and it would learn how to continue it, if you will. This definitely is an interesting take, as normally we use neural networks to solve problems that are otherwise close to impossible to tackle. For instance, it is very hard, if not impossible, to create a handcrafted algorithm that detects cats reliably, because we cannot really write down the mathematical description of a cat. However, these days we can easily teach a neural network to do that. But this case here is fundamentally different. Here, the neural networks are applied to solve something that we already know how to solve, especially given that if we use a neural network to perform this task, we have to train it, which is a long and arduous process. I hope to have convinced you that this is a bad, bad idea. Why would anyone bother to do that? Does this make any sense? Well, it does make a lot of sense. And the reason for that is that this training step only has to be done once, and afterwards, querying the neural network that predicts what happens next in the simulation runs almost immediately. This takes way less time than calculating all the forces and pressures in the simulation, while retaining high-quality results. So we suddenly went from thinking that an idea is useless to finding it amazing. What are the weaknesses of the approach? Generalization. You see, these techniques, including a newer variant that you see here, can give us detailed simulations in real time or close to real time, but if we present them with something that is far outside of the cases that they had seen in the training domain, they will fail. This does not happen with our handcrafted techniques, only with AI-based methods. So, onwards to this new technique, and you will see in just a moment that the key differentiator here is that its generalization capabilities are just astounding. Look here. The predicted results match the true simulation quite well. Let's look at it in slow motion too so we can evaluate it a little better. Looking great. But we have talked about superior generalization, so what about that? Well, it can also handle sand and goop simulations, so that's a great step beyond just water and smoke. And now, have a look at this one. This is a scene with the boxes it has been trained on. And now, let's ask it to try to simulate the evolution of significantly different shapes. Wow.
It not only does well with these previously unseen shapes, but it also handles their interactions really well. But there is more. We can also train it on a tiny domain with only a few particles, and then it is able to learn general concepts that we can reuse to simulate a much bigger domain, and also with more particles. Fantastic. But there is even more. We can train it by showing how water behaves on these water ramps, and then let's remove the ramps and see if it understands what it has to do with all these particles. Yes, it does. Now let's give it something more difficult. I want more ramps. Yes. And now, even more ramps. Yes, I love it. Let's see if it can do it with sand too. Here's the ramp for the training. And let's try an hourglass now. Absolute witchcraft. And we are even being paid to do this. I can hardly believe this. The reason why you see so many particles in many of these views is that, if we look under the hood, we see that the paper proposes a really cool graph-based method that represents the particles as nodes, and they can pass messages to each other over the connections between them. This leads to a simple, general, and accurate model that truly is a force to be reckoned with. Now, this is a great leap in neural network-based physics simulations, but of course, not everything is perfect here. Its generalization capabilities have their limits. For instance, over longer timeframes, solids may get incorrectly deformed. However, I will quietly note that during my college years, I was also studying the beautiful Navier-Stokes equations, and even as a highly motivated student, it took several months to understand the theory and write my first fluid simulator. You can check out the thesis and the source code in the video description if you are interested. And to see that these neural networks could learn something very similar in a matter of days, every time I think about this, shivers run down my spine. Absolutely amazing. What a time to be alive! This episode has been supported by Lambda. If you are a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you that they are offering GPU Cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold onto your papers, because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to lambdalabs.com slash papers and sign up for one of their amazing GPU instances today. Thanks for watching and for your generous support, and I'll see you next time.
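To make the graph-based idea in the transcript above a bit more concrete, here is a minimal, untrained Python sketch of one step of a learned particle simulator: particles within a radius are connected into a graph, messages flow along the edges, and the aggregated messages are turned into accelerations that are integrated with a simple Euler step. The weights, feature sizes, and radius below are made-up placeholders, not DeepMind's trained Graph Network Simulator, which uses learned encode-process-decode networks and several rounds of message passing.

import numpy as np

def build_edges(positions, radius):
    # Connect particles closer than `radius`; this is the graph the transcript mentions.
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    senders, receivers = np.nonzero((dists < radius) & (dists > 0.0))
    return senders, receivers

def gnn_step(positions, velocities, w_msg, w_upd, radius=0.1, dt=0.01):
    senders, receivers = build_edges(positions, radius)
    rel = positions[senders] - positions[receivers]            # per-edge relative position
    messages = np.tanh(rel @ w_msg)                            # per-edge message
    agg = np.zeros((positions.shape[0], w_msg.shape[1]))
    np.add.at(agg, receivers, messages)                        # sum incoming messages per particle
    accel = np.concatenate([velocities, agg], axis=1) @ w_upd  # predicted acceleration
    velocities = velocities + dt * accel                       # semi-implicit Euler integration
    positions = positions + dt * velocities
    return positions, velocities

rng = np.random.default_rng(0)
pos = rng.random((200, 2)); vel = np.zeros((200, 2))
w_msg = rng.standard_normal((2, 16)) * 0.1   # toy, untrained weights
w_upd = rng.standard_normal((2 + 16, 2)) * 0.1
for _ in range(10):
    pos, vel = gnn_step(pos, vel, w_msg, w_upd)

The appeal of this structure is that the same message function is reused for every pair of neighboring particles, which is one reason a model trained on a small domain with few particles can be reused on much larger scenes.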
[{"start": 0.0, "end": 4.84, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karajona Ifaher."}, {"start": 4.84, "end": 9.46, "text": " If you have been watching this series for a while, you know very well that I love learning"}, {"start": 9.46, "end": 11.84, "text": " algorithms and fluid simulations."}, {"start": 11.84, "end": 15.200000000000001, "text": " But do you know what I like even better?"}, {"start": 15.200000000000001, "end": 21.88, "text": " Learning algorithms applied to fluid simulations so I couldn't be happier with today's paper."}, {"start": 21.88, "end": 27.12, "text": " We can create wondrous fluid simulations like the ones you see here by studying the laws"}, {"start": 27.12, "end": 32.84, "text": " of fluid motion from physics and write a computer program that contains these laws."}, {"start": 32.84, "end": 38.24, "text": " As you see, the amount of detail we can simulate with these programs is nothing short of amazing."}, {"start": 38.24, "end": 41.120000000000005, "text": " However, I just mentioned neural networks."}, {"start": 41.120000000000005, "end": 46.120000000000005, "text": " If we can write a simulator that runs the laws of physics to create these programs, why"}, {"start": 46.120000000000005, "end": 49.400000000000006, "text": " would we need learning based algorithms?"}, {"start": 49.400000000000006, "end": 54.28, "text": " The answer is in this paper that we have discussed about 300 episodes ago."}, {"start": 54.28, "end": 59.58, "text": " The goal was to show a neural network video footage of lots and lots of fluid and smoke"}, {"start": 59.58, "end": 65.68, "text": " simulations and have it learn how the dynamics work to the point that it can continue and"}, {"start": 65.68, "end": 70.08, "text": " guess how the behavior of a smoke path would change over time."}, {"start": 70.08, "end": 74.6, "text": " We stopped the video and it would learn how to continue it, if you will."}, {"start": 74.6, "end": 80.08, "text": " This definitely is an interesting take as normally we use neural networks to solve problems"}, {"start": 80.08, "end": 83.6, "text": " that are otherwise close to impossible to tackle."}, {"start": 83.6, "end": 89.28, "text": " For instance, it is very hard if not impossible to create a handcrafted algorithm that detects"}, {"start": 89.28, "end": 95.11999999999999, "text": " cats reliably because we cannot really write down the mathematical description of a cat."}, {"start": 95.11999999999999, "end": 99.91999999999999, "text": " However, these days we can easily teach a neural network to do that."}, {"start": 99.91999999999999, "end": 102.88, "text": " But this test here is fundamentally different."}, {"start": 102.88, "end": 108.88, "text": " Here, the neural networks are applied to solve something that we already know how to solve,"}, {"start": 108.88, "end": 114.32, "text": " especially given that if we use a neural network to perform this task, we have to train it,"}, {"start": 114.32, "end": 116.72, "text": " which is a long and arduous process."}, {"start": 116.72, "end": 120.28, "text": " I hope to have convinced you that this is a bad, bad idea."}, {"start": 120.28, "end": 122.92, "text": " Why would anyone bother to do that?"}, {"start": 122.92, "end": 124.8, "text": " Does this make any sense?"}, {"start": 124.8, "end": 127.6, "text": " Well, it does make a lot of sense."}, {"start": 127.6, "end": 132.88, "text": " And the reason for that is that this training step only has to be done once and afterwards"}, 
{"start": 132.88, "end": 138.44, "text": " querying the neural network that is predicting what happens next in the simulation runs almost"}, {"start": 138.44, "end": 140.0, "text": " immediately."}, {"start": 140.0, "end": 145.24, "text": " This takes way less time than calculating all the forces and pressures in the simulation"}, {"start": 145.24, "end": 148.16, "text": " while retaining high quality results."}, {"start": 148.16, "end": 154.07999999999998, "text": " So we suddenly went from thinking that an idea is useless to being amazing."}, {"start": 154.07999999999998, "end": 156.88, "text": " What are the weaknesses of the approach?"}, {"start": 156.88, "end": 158.04, "text": " Generalization."}, {"start": 158.04, "end": 163.32, "text": " You see, these techniques, including a newer variant that you see here, can give us detailed"}, {"start": 163.32, "end": 169.28, "text": " simulations in real time or close to real time, but if we present them with something that"}, {"start": 169.28, "end": 174.68, "text": " is far outside of the cases that they had seen in the training domain, they will fail."}, {"start": 174.68, "end": 179.6, "text": " This does not happen with our handcrafted techniques only to AI-based methods."}, {"start": 179.6, "end": 185.0, "text": " So, onwards to this new technique, and you will see in just a moment that the key differentiator"}, {"start": 185.0, "end": 189.88, "text": " here is that its generalization capabilities are just astounding."}, {"start": 189.88, "end": 190.88, "text": " Look here."}, {"start": 190.88, "end": 196.64, "text": " Predicted results match the true simulation quite well."}, {"start": 196.64, "end": 203.12, "text": " Let's look at it in slow motion too so we can evaluate it a little better."}, {"start": 203.12, "end": 204.72, "text": " Looking great."}, {"start": 204.72, "end": 209.48, "text": " But we have talked about superior generalization, so what about that?"}, {"start": 209.48, "end": 215.88, "text": " Well, it can also handle sand and goop simulations so that's a great step beyond just water and"}, {"start": 215.88, "end": 225.88, "text": " smoke."}, {"start": 225.88, "end": 228.07999999999998, "text": " And now, have a look at this one."}, {"start": 228.07999999999998, "end": 232.64, "text": " This is a scene with the boxes it has been trained on."}, {"start": 232.64, "end": 239.88, "text": " And now, let's ask it to try to simulate the evolution of significantly different shapes."}, {"start": 239.88, "end": 242.68, "text": " Wow."}, {"start": 242.68, "end": 248.68, "text": " It not only does well with these previously unseen shapes, but it also handles their interactions"}, {"start": 248.68, "end": 250.88, "text": " really well."}, {"start": 250.88, "end": 252.08, "text": " But there is more."}, {"start": 252.08, "end": 257.8, "text": " We can also train it on a tiny domain with only a few particles, and then it is able"}, {"start": 257.8, "end": 263.8, "text": " to learn general concepts that we can reuse to simulate a much bigger domain and also"}, {"start": 263.8, "end": 265.84000000000003, "text": " with more particles."}, {"start": 265.84000000000003, "end": 268.56, "text": " Fantastic."}, {"start": 268.56, "end": 270.36, "text": " But there is even more."}, {"start": 270.36, "end": 275.96000000000004, "text": " We can train it by showing how water behaves on these water ramps, and then let's remove"}, {"start": 275.96000000000004, "end": 281.44, "text": " the ramps and see if it understands what it has to do with all 
these particles."}, {"start": 281.44, "end": 283.24, "text": " Yes, it does."}, {"start": 283.24, "end": 286.16, "text": " Now let's give it something more difficult."}, {"start": 286.16, "end": 288.16, "text": " I want more ramps."}, {"start": 288.16, "end": 289.56, "text": " Yes."}, {"start": 289.56, "end": 293.56, "text": " And now, even more ramps."}, {"start": 293.56, "end": 298.64, "text": " Yes, I love it."}, {"start": 298.64, "end": 301.84, "text": " Let's see if it can do it with sand too."}, {"start": 301.84, "end": 304.03999999999996, "text": " Here's the ramp for the training."}, {"start": 304.03999999999996, "end": 309.0, "text": " And let's try an hourglass now."}, {"start": 309.0, "end": 310.28, "text": " Absolute witchcraft."}, {"start": 310.28, "end": 312.68, "text": " And we are even being paid to do this."}, {"start": 312.68, "end": 314.68, "text": " I can hardly believe this."}, {"start": 314.68, "end": 319.28, "text": " The reason why you see so many particles in many of these views is because if we look"}, {"start": 319.28, "end": 325.2, "text": " under the hood, we see that the paper proposes a really cool graph-based method that represents"}, {"start": 325.2, "end": 330.24, "text": " the particles and they can pass messages to each other over these connections between"}, {"start": 330.24, "end": 331.24, "text": " them."}, {"start": 331.24, "end": 336.64, "text": " This leads to a simple, general, and accurate model that truly is a force to be reckoned"}, {"start": 336.64, "end": 337.64, "text": " with."}, {"start": 337.64, "end": 342.36, "text": " Now, this is a great leap in neural network-based physics simulations, but of course,"}, {"start": 342.36, "end": 344.15999999999997, "text": " not everything is perfect here."}, {"start": 344.15999999999997, "end": 347.2, "text": " Its generalization capabilities have their limits."}, {"start": 347.2, "end": 352.76, "text": " For instance, over longer timeframes, solids may get incorrectly deformed."}, {"start": 352.76, "end": 359.28, "text": " However, I will quietly note that during my college years, I was also studying the beautiful"}, {"start": 359.28, "end": 365.24, "text": " Navier Stokes equations and even as a highly motivated student, it took several months to"}, {"start": 365.24, "end": 368.92, "text": " understand the theory and write my first fluid simulator."}, {"start": 368.92, "end": 373.68, "text": " You can check out the thesis and the source code in the video description if you are interested."}, {"start": 373.68, "end": 378.59999999999997, "text": " And to see that these neural networks could learn something very similar in a matter of"}, {"start": 378.6, "end": 383.68, "text": " days, every time I think about this, she goes round down my spine."}, {"start": 383.68, "end": 385.72, "text": " Absolutely amazing."}, {"start": 385.72, "end": 387.56, "text": " What a time to be alive!"}, {"start": 387.56, "end": 389.88, "text": " This episode has been supported by Lambda."}, {"start": 389.88, "end": 395.12, "text": " If you are a researcher or a startup looking for cheap GPU compute to run these algorithms,"}, {"start": 395.12, "end": 397.36, "text": " check out Lambda GPU Cloud."}, {"start": 397.36, "end": 402.40000000000003, "text": " I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you"}, {"start": 402.40000000000003, "end": 405.8, "text": " that they are offering GPU Cloud services as well."}, {"start": 405.8, "end": 413.2, "text": " The Lambda GPU Cloud can train 
ImageNet to 93% accuracy for less than $19."}, {"start": 413.2, "end": 418.12, "text": " Lambda's web-based IDE lets you easily access your instance right in your browser."}, {"start": 418.12, "end": 424.04, "text": " And finally, hold onto your papers because the Lambda GPU Cloud costs less than half of"}, {"start": 424.04, "end": 426.2, "text": " AWS and Azure."}, {"start": 426.2, "end": 432.28000000000003, "text": " Make sure to go to lambdaleps.com slash papers and sign up for one of their amazing GPU"}, {"start": 432.28000000000003, "end": 433.68, "text": " instances today."}, {"start": 433.68, "end": 437.48, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=6oQ0Obi14rM
OpenAI’s Jukebox AI Writes Amazing New Songs 🎼
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers Their instrumentation of this paper is available here: https://app.wandb.ai/authors/openai-jukebox/reports/Experiments-with-OpenAI-Jukebox--VmlldzoxMzQwODg 📝 The paper "Jukebox: A Generative Model for Music" is available here: https://openai.com/blog/jukebox/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nader S., Owen Campbell-Moore, Owen Skarpness, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh More info if you would like to appear here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-1261793/ Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, I will attempt to tell you a glorious tale about AI-based music generation. You see, there is no shortage of neural network-based methods that can perform physics simulations, style transfer, deepfakes, and a lot more applications where the training data is typically images or video. If the training data for a neural network is in pure text, it can learn about that. If the training data is waveforms and music, it can learn that too. Wait, really? Yes. In fact, let's look at two examples and then dive into today's amazing paper. In this earlier work, by the name Look, Listen, and Learn, two scientists at DeepMind set out to look at a large number of videos with sound. You see here that there is a neural network for processing the vision and one for the audio information. That sounds great, but what are these heatmaps? These were created by this learning algorithm, and they show us which part of the image is responsible for the sounds that we hear in the video. The hotter the color, the more sound is expected from a given region. It was truly amazing that it didn't automatically look for humans and color them red in the heatmap. There are cases where the humans are expected to be the source of the noise, for instance, in concerts, whereas in other cases, they don't emit any noise at all. It could successfully identify these cases. This still feels like science fiction to me, and we covered this paper in 2017, approximately 250 episodes ago. You will see that we have come a long, long way since. We often say that these neural networks should try to embody general learning concepts. That's an excellent, and in this case, testable statement, so let's go ahead and have a look under the hood of these vision and audio processing neural networks, and, yes, they are almost identical. Some parameters are not the same because they have been adapted to the length and dimensionality of the incoming data, but the key algorithm that we run for the learning is the same. Later, in 2018, DeepMind published a follow-up work that looks at performances on the piano from the masters of the past and learns to play in their style. A key differentiating factor here was that it did not do what most previous techniques do, which was looking at the score of the performance. These older techniques knew what to play, but not how to play these notes, and these are the nuances that truly make music come alive. This method learned from raw audio waveforms and thus could capture much, much more of the artistic style. Let's listen to it, and in the meantime, you can look at the composers it has learned from to produce these works. However, in 2019, OpenAI recognized that text-based music synthesizers can not only look at a piece of score, but can also continue it, thereby composing a new piece of music, and what's more, they could even create really cool blends between genres. Listen, as their AI starts out from the first six notes of the Chopin piece and transitions into a pop style, with a bunch of different instruments entering a few seconds in. Very cool. The score-based techniques are a little lacking in nuance, but can do magical genre mixing and more, whereas the waveform-based techniques are more limited, but can create much more sophisticated music. Are you thinking what I am thinking? Yes, you have guessed right.
Hold on to your papers, because in OpenAI's new work, they tried to fuse the two concepts together, or, in other words, take a genre, an artist, and even lyrics as an input, and it would create a song for us. Let's marvel at a few curated samples together. The genre, artist, and lyrics information will always be on the screen. Wow, I am speechless. Love the AI-based lyrics, too. This has the nuance of waveform-based techniques with the versatility of score-based methods. Glorious. If you look in the video description, you will find a selection of uncurated music samples as well. It does what it does by compressing the raw audio waveform into a compact representation. In this space, it is much easier to synthesize new patterns, after which we can decompress them to get the output waveforms. It has also learned to group up and cluster a selection of artists, which reflects how the AI thinks about them. There is so much cool stuff in here that it would be worthy of a video of its own. Note that it currently takes 9 hours to generate 1 minute of music, and the network was mainly trained on Western music and only speaks English, but you know, as we always say around here, two more papers down the line, and it will be improved significantly. I cannot wait to report on them, should any follow-up works appear, so make sure to subscribe and hit the bell icon to not miss it. What a time to be alive. What you see here is an instrumentation of this exact paper we have talked about, which was made by Weights & Biases. I think organizing these experiments really showcases the usability of their system. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you are an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support, and for helping us make better videos for you. Thanks for watching, and for your generous support, and I'll see you next time.
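The "compress, generate in the compact space, then decompress" idea from the transcript above can be illustrated with a deliberately tiny Python sketch. This is nothing like Jukebox's actual VQ-VAE plus transformer stack: the compression here is just downsampling with mu-law quantization, and the generative model over the codes is a bigram table, both chosen only to keep the structure of the pipeline visible.

import numpy as np

def mu_law_encode(x, levels=256):
    # Compress a waveform in [-1, 1] into discrete codes in [0, levels - 1].
    mu = levels - 1
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    return ((y + 1) / 2 * mu).astype(np.int64)

def mu_law_decode(codes, levels=256):
    # Invert the quantization back to an approximate waveform.
    mu = levels - 1
    y = codes.astype(np.float64) / mu * 2 - 1
    return np.sign(y) * ((1 + mu) ** np.abs(y) - 1) / mu

def fit_bigram(codes, levels=256):
    # A crude "prior" over codes: count which code follows which (Laplace smoothed).
    counts = np.ones((levels, levels))
    np.add.at(counts, (codes[:-1], codes[1:]), 1)
    return counts / counts.sum(axis=1, keepdims=True)

def sample(bigram, start, length, rng):
    out = [start]
    for _ in range(length - 1):
        out.append(rng.choice(bigram.shape[0], p=bigram[out[-1]]))
    return np.array(out)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 16000)
wave = 0.5 * np.sin(2 * np.pi * 220 * t)              # stand-in "training audio"
codes = mu_law_encode(wave[::8])                      # compress: downsample + quantize
model = fit_bigram(codes)                             # model patterns in the compact code space
new_codes = sample(model, int(codes[0]), codes.size, rng)  # generate new code sequences
new_wave = np.repeat(mu_law_decode(new_codes), 8)     # decompress back to a waveform

Jukebox replaces each of these stand-ins with learned components, a hierarchical VQ-VAE for the compression and large transformers as the prior over the codes, which is what makes the generated audio sound like actual songs.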
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karajon Aifahir."}, {"start": 4.6000000000000005, "end": 10.6, "text": " Today, I will attempt to tell you a glorious tale about AI-based music generation."}, {"start": 10.6, "end": 16.6, "text": " You see, there is no shortage of neural network-based methods that can perform physics simulations,"}, {"start": 16.6, "end": 25.3, "text": " style transfer, deepfakes, and a lot more applications where the training data is typically images or video."}, {"start": 25.3, "end": 30.5, "text": " If the training data for a neural network is in pure text, it can learn about that."}, {"start": 30.5, "end": 35.0, "text": " If the training data is waveforms and music, it can learn that too."}, {"start": 35.0, "end": 37.5, "text": " Wait, really? Yes."}, {"start": 37.5, "end": 43.0, "text": " In fact, let's look at two examples and then dive into today's amazing paper."}, {"start": 43.0, "end": 48.6, "text": " In this earlier work, by the name, Look, Listen, and Learn, two scientists, a deep mind,"}, {"start": 48.6, "end": 52.3, "text": " set out to look at a large number of videos with sound."}, {"start": 52.3, "end": 58.599999999999994, "text": " You see here that there is a neural network for processing the vision and one for the audio information."}, {"start": 58.599999999999994, "end": 61.9, "text": " That sounds great, but what are these heatmaps?"}, {"start": 61.9, "end": 70.3, "text": " These were created by this learning algorithm, and they show us which part of the image is responsible for the sounds that we hear in the video."}, {"start": 70.3, "end": 74.6, "text": " The harder the color, the more sounds are expected from a given region."}, {"start": 74.6, "end": 81.3, "text": " It was truly amazing that it didn't automatically look for humans and colored them red in the heatmap."}, {"start": 81.3, "end": 91.5, "text": " There are cases where the humans are expected to be the source of the noise, for instance, in concerts where, in other cases, they don't emit any noise at all."}, {"start": 91.5, "end": 94.5, "text": " It could successfully identify these cases."}, {"start": 94.5, "end": 103.9, "text": " This still feels like science fiction to me, and we covered this paper in 2017 approximately 250 episodes ago."}, {"start": 103.9, "end": 107.1, "text": " You will see that we have come a long, long way since."}, {"start": 107.1, "end": 112.89999999999999, "text": " We often say that these neural networks should try to embody general learning concepts."}, {"start": 112.89999999999999, "end": 125.69999999999999, "text": " That's an excellent, and in this case, testable statement, so let's go ahead and have a look under the hood of this vision and audio processing neural networks, and, yes, they are almost identical."}, {"start": 125.69999999999999, "end": 136.9, "text": " Some parameters are not the same because they have been adapted to the length and dimensionality of the incoming data, but the key algorithm that we run for the learning is the same."}, {"start": 136.9, "end": 147.9, "text": " Later, in 2018, DeepMind published a follow-up work that looks at performances on the piano from the masters of the past and learns to play in their style."}, {"start": 147.9, "end": 156.70000000000002, "text": " A key differentiating factor here was that it did not do what most previous techniques do, which was looking at the score of the performance."}, {"start": 156.70000000000002, "end": 
165.5, "text": " These older techniques knew what to play, but not how to play these notes, and these are the nuances that truly make music come alive."}, {"start": 165.5, "end": 172.5, "text": " This method learned from raw audio waveforms and thus could capture much, much more of the artistic style."}, {"start": 172.5, "end": 201.5, "text": " Let's listen to it, and in the meantime, you can look at the composers it has learned from to produce these works."}, {"start": 203.5, "end": 221.5, "text": " However, in 2019, OpenAI recognized the text-based music synthesizers can not only look at a piece of score, but can also continue it, thereby composing a new piece of music, and what's more, they could even create really cool blends between genres."}, {"start": 221.5, "end": 232.5, "text": " Listen, as their AI starts out from the first six notes of the Chopin piece and transitions into a pop style with a bunch of different instruments entering a few seconds in."}, {"start": 232.5, "end": 261.5, "text": " Very cool. The score-based techniques are a little lacking in nuance, but can do magical, genre mixing, and more, whereas the waveform-based techniques are more limited, but can create much more sophisticated music."}, {"start": 261.5, "end": 280.5, "text": " Are you thinking what I am thinking? Yes, you have guessed right. Hold on to your papers, because in OpenAI's new work, they tried to fuse the two concepts together, or, in other words, take a genre, an artist, and even lyrics as an input, and it would create a song for us."}, {"start": 280.5, "end": 290.5, "text": " Let's marvel at a few curated samples together. The genre, artist, and lyrics information will always be on the screen."}, {"start": 311.5, "end": 339.5, "text": " Wow, I am speechless. Love the AI-based lyrics, too."}, {"start": 339.5, "end": 352.5, "text": " This has the nuance of waveform-based techniques with the versatility of score-based methods. Glorious. If you look in the video description, you will find a selection of uncurated music samples as well."}, {"start": 352.5, "end": 367.5, "text": " It does what it does by compressing the raw audio waveform into a compact representation. In this space, it is much easier to synthesize new patterns after which we can decompress it to get the output waveforms."}, {"start": 367.5, "end": 375.5, "text": " It has also learned to group up and cluster a selection of artists which reflects how the AI thinks about them."}, {"start": 375.5, "end": 395.5, "text": " There is so much cool stuff in here that it would be worthy of a video of its own. Note that it currently takes 9 hours to generate 1 minute of music, and the network was mainly trained on western music, and only speaks English, but you know, as we always say around here, 2 more papers down the line, and it will be improved significantly."}, {"start": 395.5, "end": 403.5, "text": " I cannot wait to report on them, should any follow-up works appear, so make sure to subscribe and hit the bell icon to not miss it."}, {"start": 403.5, "end": 412.5, "text": " What a time to be alive. What you see here is an instrumentation of this exact paper we have talked about which was made by weights and biases."}, {"start": 412.5, "end": 418.5, "text": " I think organizing these experiments really showcases the usability of their system."}, {"start": 418.5, "end": 434.5, "text": " Weight and biases provides tools to track your experiments in your deep learning projects. 
Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more."}, {"start": 434.5, "end": 441.5, "text": " And the best part is that if you are an academic or have an open source project, you can use their tools for free."}, {"start": 441.5, "end": 452.5, "text": " It really is as good as it gets. Make sure to visit them through wnb.com slash papers, or just click the link in the video description, and you can get a free demo today."}, {"start": 452.5, "end": 458.5, "text": " Our thanks to weights and biases for their long standing support, and for helping us make better videos for you."}, {"start": 458.5, "end": 471.5, "text": " Thanks for watching, and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=qwAiLBPEt_k
This AI Controls Virtual Quadrupeds! 🐕
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers Their instrumentation for this previous work is available here: https://app.wandb.ai/sweep/nerf/reports/NeRF-%E2%80%93-Representing-Scenes-as-Neural-Radiance-Fields-for-View-Synthesis--Vmlldzo3ODIzMA 📝 The paper "CARL: Controllable Agent with Reinforcement Learning for Quadruped Locomotion" is available here: https://inventec-ai-center.github.io/projects/CARL/index.html 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nader S., Owen Campbell-Moore, Owen Skarpness, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh More info if you would like to appear here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. If we have an animation movie or a computer game with quadrupeds and we are yearning for really high-quality, lifelike animations, motion capture is often the go-to tool for the job. Motion capture means that we put an actor, in our case a dog, in the studio and we ask it to perform sitting, trotting, pacing and jumping, record its motion, and transfer it onto our virtual character. There are two key challenges with this approach. One, we have to try to weave together all of these motions, because we cannot record all the possible transitions between sitting and pacing, jumping and trotting, and so on. We need some filler animations to make these transitions work. This was addressed by this neural network-based technique here. The other one is trying to reduce these unnatural foot-sliding motions. Both of these have been addressed by learning-based algorithms in the previous works that you see here. Later, bipeds were also taught to maneuver through complex geometry and sit in not one kind of chair, but any chair, regardless of geometry. This already sounds like science fiction. So, are we done, or can these amazing techniques be further improved? Well, we are talking about research, so the answer is, of course, yes. Here, you see a technique that reacts to its environment in a believable manner. It can accidentally step on the ball, stagger a little bit, and then flounder on this slippery surface, and it doesn't fall, and it can do much, much more. The goal is that we would be able to do all this without explicitly programming all of these behaviors by hand, but unfortunately, there is a problem. If we write an agent that behaves according to physics, it will be difficult to control properly. And this is where this new technique shines. It gives us physically appealing motion, and we can grab a controller and play with the character, like in a video game. The first step we need to perform is called imitation learning. This means looking at real reference movement data and trying to reproduce it. This is going to be motion that looks great and is very natural; however, we are nowhere near done, because we still don't have any control over this agent. Can we improve this somehow? Well, let's try something and see if it works. This paper proposes that in step number two, we try an architecture by the name of generative adversarial network. Here, we have a neural network that generates motion and a discriminator that looks at these motions and tries to tell what is real and what is fake. However, to accomplish this, we need lots of real and fake data that we then use to train the discriminator to be able to tell which one is which. So, how do we do that? Well, let's try to label the movement that came from the user's controller inputs as fake, and the reference movement data from before as real. Remember that this makes sense, as we concluded that the reference motion looked natural. If we do this, over time, we will have a discriminator network that is able to look at a piece of animation data and tell whether it is real or fake. So, after doing all this work, how does this perform? Does this work? Well, sort of, but it does not react well if we try to control the simulation. If we let it run undisturbed, it works beautifully, and now, when we try to stop it with the controller, well, this needs some more work, doesn't it? So, how do we adapt this architecture to the animation problem that we have here?
And here comes one of the key ideas of the paper. In step number three, we can rewire this whole thing to originate from the controller and introduce a deep reinforcement learning-based fine-tuning stage. This was the amazing technique that DeepMind used to defeat Atari Breakout. So, what good does all this do for us? Well, hold on to your papers, because it enables true user control while synthesizing motion that is very robust against tough, previously unseen scenarios. And if you have been watching this series for a while, you know what is coming. Of course, throwing blocks at it and seeing how well it can take the punishment. As you see, the AI is taking it like a champ. We can also add pathfinding to the agent and, of course, being computer graphics researchers, throw some blocks into the mix for good measure. It performs beautifully. This is so realistic. We can also add sensors to the agent to allow it to navigate in this virtual world in a realistic manner. Just a note on how remarkable this is. So, this quadruped behaves according to physics, lets us control it with the controller, which is already somewhat of a contradiction. And it is robust against these perturbations at the same time. This is absolute witchcraft, and no doubt it has earned its acceptance to SIGGRAPH, which is perhaps the most prestigious research venue in computer graphics. Congratulations! What you see here is an instrumentation for a previous paper that we covered in this series, which was made by Weights & Biases. I think organizing these experiments really showcases the usability of their system. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you are an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
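As a rough illustration of the discriminator labeling described in the transcript above, here is a small PyTorch sketch: windows of mocap reference motion are labeled real, windows produced by the controller-driven policy are labeled fake, and the discriminator's score can then be fed back as a naturalness reward during the reinforcement-learning fine-tuning stage. The network size, the 10-frame window of 24 joint features, and the reward wiring are illustrative assumptions, not the paper's exact architecture or reward shaping.

import torch
import torch.nn as nn

# Toy motion discriminator: scores whether a short window of joint features came
# from the mocap reference set ("real") or from the controller-driven policy ("fake").
disc = nn.Sequential(nn.Linear(24 * 10, 128), nn.ReLU(), nn.Linear(128, 1))
opt = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def discriminator_step(reference_batch, policy_batch):
    # reference_batch, policy_batch: (batch, 10 frames * 24 joint features)
    real_logits = disc(reference_batch)
    fake_logits = disc(policy_batch)
    loss = bce(real_logits, torch.ones_like(real_logits)) + \
           bce(fake_logits, torch.zeros_like(fake_logits))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def style_reward(policy_window):
    # One common recipe: use the discriminator's score as a "naturalness" reward
    # added to the task reward during RL fine-tuning; the paper's exact wiring may differ.
    with torch.no_grad():
        return torch.sigmoid(disc(policy_window)).squeeze(-1)

ref = torch.randn(32, 240)   # placeholder mocap windows
pol = torch.randn(32, 240)   # placeholder policy rollout windows
discriminator_step(ref, pol)

In practice, the discriminator and the policy are updated in alternation, so the policy keeps being pushed toward motion that the discriminator can no longer tell apart from the reference data, while the controller input keeps it steerable.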
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.4, "end": 8.1, "text": " If we have an animation movie or a computer game with quadrupeds"}, {"start": 8.1, "end": 12.1, "text": " and we are yearning for really high quality, life-like animations,"}, {"start": 12.1, "end": 15.8, "text": " motion capture is often the go-to tool for the job."}, {"start": 15.8, "end": 20.8, "text": " Motion capture means that we put an actor in our case a dog in the studio"}, {"start": 20.8, "end": 24.8, "text": " and we ask it to perform sitting, trotting, pacing and jumping,"}, {"start": 24.8, "end": 29.3, "text": " record its motion and transfer it onto our virtual character."}, {"start": 29.3, "end": 31.8, "text": " There are two key challenges with this approach."}, {"start": 31.8, "end": 35.8, "text": " One, we have to try to weave together all of these motions"}, {"start": 35.8, "end": 40.4, "text": " because we cannot record all the possible transitions between sitting and pacing,"}, {"start": 40.4, "end": 42.9, "text": " jumping and trotting and so on."}, {"start": 42.9, "end": 46.7, "text": " We need some filler animations to make these transitions work."}, {"start": 46.7, "end": 50.1, "text": " This was addressed by this neural network-based technique here."}, {"start": 50.1, "end": 54.6, "text": " The other one is trying to reduce these unnatural foot-sliding motions."}, {"start": 54.6, "end": 57.7, "text": " Both of these have been addressed by learning based algorithms"}, {"start": 57.7, "end": 60.300000000000004, "text": " in the previous works that you see here."}, {"start": 60.300000000000004, "end": 65.10000000000001, "text": " Later, bipeds were also taught to maneuver through complex geometry"}, {"start": 65.10000000000001, "end": 70.7, "text": " and sit in not one kind of chair, but any chair regardless of geometry."}, {"start": 70.7, "end": 73.60000000000001, "text": " This already sounds like science fiction."}, {"start": 73.60000000000001, "end": 78.80000000000001, "text": " So, are we done or can these amazing techniques be further improved?"}, {"start": 78.80000000000001, "end": 83.5, "text": " Well, we are talking about research, so the answer is, of course, yes."}, {"start": 83.5, "end": 88.2, "text": " Here, you see a technique that reacts to its environment in a believable manner."}, {"start": 88.2, "end": 93.5, "text": " It can accidentally stop on the ball, stagger a little bit,"}, {"start": 93.5, "end": 98.1, "text": " and then flounder on this slippery surface,"}, {"start": 98.1, "end": 102.6, "text": " and it doesn't fall, and it can do much, much more."}, {"start": 102.6, "end": 107.3, "text": " The goal is that we would be able to do all this without explicitly programming"}, {"start": 107.3, "end": 112.8, "text": " all of these behaviors by hand, but unfortunately, there is a problem."}, {"start": 112.8, "end": 116.1, "text": " If we write an agent that behaves according to physics,"}, {"start": 116.1, "end": 118.5, "text": " it will be difficult to control properly."}, {"start": 118.5, "end": 121.0, "text": " And this is where these new techniques shines."}, {"start": 121.0, "end": 123.39999999999999, "text": " It gives us physically appealing motion,"}, {"start": 123.39999999999999, "end": 128.7, "text": " and we can grab a controller and play with the character, like in a video game."}, {"start": 128.7, "end": 132.6, "text": " The first step we need to perform is called imitation 
learning."}, {"start": 132.6, "end": 137.5, "text": " This means looking at real reference movement data and trying to reproduce it."}, {"start": 137.5, "end": 142.7, "text": " This is going to be motion that looks great, is very natural, however,"}, {"start": 142.7, "end": 147.8, "text": " we are nowhere near done because we still don't have any control over this agent."}, {"start": 147.8, "end": 149.6, "text": " Can we improve this somehow?"}, {"start": 149.6, "end": 152.9, "text": " Well, let's try something and see if it works."}, {"start": 152.9, "end": 155.7, "text": " This paper proposes that in step number two,"}, {"start": 155.7, "end": 160.3, "text": " we try an architecture by the name generative adversarial network."}, {"start": 160.3, "end": 163.3, "text": " Here, we have a neural network that generates motion"}, {"start": 163.3, "end": 166.1, "text": " and the discriminator that looks at these motions"}, {"start": 166.1, "end": 169.5, "text": " and tries to tell what is real and what is fake."}, {"start": 169.5, "end": 174.5, "text": " However, to accomplish this, we need lots of real and fake data"}, {"start": 174.5, "end": 179.4, "text": " that we then use to train the discriminator to be able to tell which one is which."}, {"start": 179.4, "end": 181.5, "text": " So, how do we do that?"}, {"start": 181.5, "end": 187.4, "text": " Well, let's try to label the movement that came from the user controller inputs as fake"}, {"start": 187.4, "end": 190.7, "text": " and the reference movement data from before as real."}, {"start": 190.7, "end": 196.1, "text": " Remember that this makes sense as we concluded that the reference motion looked natural."}, {"start": 196.1, "end": 199.89999999999998, "text": " If we do this, over time, we will have a discriminator network"}, {"start": 199.89999999999998, "end": 203.0, "text": " that is able to look at a piece of animation data"}, {"start": 203.0, "end": 206.0, "text": " and tell whether it is real or fake."}, {"start": 206.0, "end": 209.89999999999998, "text": " So, after doing all this work, how does this perform?"}, {"start": 209.89999999999998, "end": 211.29999999999998, "text": " Does this work?"}, {"start": 211.29999999999998, "end": 217.1, "text": " Well, sort of, but it does not react well if we try to control the simulation."}, {"start": 217.1, "end": 220.9, "text": " If we let it run undisturbed, it works beautifully"}, {"start": 220.9, "end": 224.29999999999998, "text": " and now, when we try to stop it with the controller,"}, {"start": 226.5, "end": 229.1, "text": " well, this needs some more work, doesn't it?"}, {"start": 229.1, "end": 234.1, "text": " So, how do we adapt this architecture to the animation problem that we have here?"}, {"start": 234.1, "end": 237.6, "text": " And here comes one of the key ideas of the paper."}, {"start": 237.6, "end": 244.1, "text": " In step number three, we can revire this whole thing to originate from the controller"}, {"start": 244.1, "end": 248.29999999999998, "text": " and introduce a deep reinforcement learning based fine tuning stage."}, {"start": 248.29999999999998, "end": 253.29999999999998, "text": " This was the amazing technique that DeepMind used to defeat Atari Breakout."}, {"start": 253.29999999999998, "end": 256.1, "text": " So, what good does all this for us?"}, {"start": 256.1, "end": 260.7, "text": " Well, hold on to your papers because it enables true user control"}, {"start": 260.7, "end": 266.3, "text": " while synthesizing motion that is very robust against tough 
previously unseen scenarios."}, {"start": 266.3, "end": 270.1, "text": " And if you have been watching this series for a while, you know what is coming?"}, {"start": 270.1, "end": 275.1, "text": " Of course, throwing blocks at it and see how well it can take the punishment."}, {"start": 275.1, "end": 279.1, "text": " As you see, the AI is taking it like a champ."}, {"start": 281.6, "end": 284.1, "text": " We can also add pathfinding to the agent"}, {"start": 284.1, "end": 287.1, "text": " and, of course, being computer graphics researchers"}, {"start": 287.1, "end": 290.1, "text": " throws some blocks into the mix for good measure."}, {"start": 290.1, "end": 292.1, "text": " It performs beautifully."}, {"start": 292.1, "end": 294.1, "text": " This is so realistic."}, {"start": 294.1, "end": 298.1, "text": " We can also add sensors to the agent to allow them to navigate"}, {"start": 298.1, "end": 301.1, "text": " in this virtual world in a realistic manner."}, {"start": 301.1, "end": 304.1, "text": " Just a note on how remarkable this is."}, {"start": 304.1, "end": 307.1, "text": " So, this quadruped behaves according to physics,"}, {"start": 307.1, "end": 312.1, "text": " lets us control it with the controller, which is already somewhat of a contradiction."}, {"start": 312.1, "end": 317.1, "text": " And it is robust against these perturbations at the same time."}, {"start": 317.1, "end": 322.1, "text": " This is absolute witchcraft and no doubt it has earned to be accepted to C-Graph"}, {"start": 322.1, "end": 327.1, "text": " which is perhaps the most prestigious research venue in computer graphics."}, {"start": 327.1, "end": 329.1, "text": " Congratulations!"}, {"start": 329.1, "end": 334.1, "text": " What you see here is an instrumentation for a previous paper that we covered in this series"}, {"start": 334.1, "end": 337.1, "text": " which was made by weights and biases."}, {"start": 337.1, "end": 342.1, "text": " I think organizing these experiments really showcases the usability of their system."}, {"start": 342.1, "end": 346.1, "text": " Weight and biases provides tools to track your experiments in your deep learning projects."}, {"start": 346.1, "end": 350.1, "text": " Their system is designed to save you a ton of time and money"}, {"start": 350.1, "end": 356.1, "text": " and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research,"}, {"start": 356.1, "end": 358.1, "text": " GitHub and more."}, {"start": 358.1, "end": 363.1, "text": " And the best part is that if you are an academic or have an open source project"}, {"start": 363.1, "end": 365.1, "text": " you can use their tools for free."}, {"start": 365.1, "end": 367.1, "text": " It really is as good as it gets."}, {"start": 367.1, "end": 371.1, "text": " Make sure to visit them through www.nb.com slash papers"}, {"start": 371.1, "end": 374.1, "text": " or just click the link in the video description"}, {"start": 374.1, "end": 376.1, "text": " and you can get the free demo today."}, {"start": 376.1, "end": 379.1, "text": " Our thanks to weights and biases for their long standing support"}, {"start": 379.1, "end": 382.1, "text": " and for helping us make better videos for you."}, {"start": 382.1, "end": 384.1, "text": " Thanks for watching and for your generous support"}, {"start": 384.1, "end": 386.1, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=b8sCSumMUvM
Is Style Transfer For Fluid Simulations Possible? 🌊
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers Their showcased post is available here: https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI 📝 The paper "Lagrangian Neural Style Transfer for Fluids" is available here: http://www.byungsoo.me/project/lnst/index.html 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Michael Albrecht, Nader S., Owen Campbell-Moore, Owen Skarpness, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh More info if you would like to appear here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. Style Transfer is a technique in machine learning research where we have two input images, one for content and one for style, and the output is our content image reimagined with this new style. The cool part is that the content can be a photo straight from our camera, and the style can be a painting, which leads to the super fun results that you see here. An earlier paper had shown that the more sophisticated ones can sometimes make even art curators think that they are real. This previous work blew me away as it could perform style transfer for smoke simulations. I almost fell out of the chair when I first saw these results. It could do fire textures, starry night, you name it. It seems that it is able to do anything we can think of. Now let me try to explain two things. One, why is this so difficult? And two, the results are really good, so are there any shortcomings? Doing this for smoke simulations is a big departure from 2D style transfer, because that takes an image, whereas this works in 3D and does not deal with images, but with density fields. A density field means a collection of numbers that describe how dense a smoke plume is at a given spatial position. It is a physical description of a smoke plume, if you will. So how could we possibly apply artistic style from an image to a collection of densities? The solution in this earlier paper was to first downsample the density field to a coarser version, perform the style transfer there, and upsample this density field again with already existing techniques. This technique was called transport-based neural style transfer, TNST in short, please remember this. Now let's look at some results from this technique. This is what our simulation would look like normally, and then all we have to do is show this image to the simulator and what does it do with it? Wow, my goodness, just look at those heavenly patterns. So what does today's new follow-up work offer to us that the previous one doesn't? How can this seemingly nearly perfect technique be improved? Well, this new work takes an even more brazen vantage point to this question. If style transfer on density fields is hard, then try a different representation. The title of the paper says Lagrangian neural style transfer. So what does that mean? It means particles. This was made for particle-based simulations, which comes with several advantages. One, because the styles are now attached to particles, we can choose different styles for different smoke plumes, and they will remember what style they are supposed to follow. Because of this advantageous property, we can even ask the particles to change their styles over time, creating these heavenly animations. In these 2D examples, you also see how the texture of the simulation evolves over time, and that the elements of the style are really propagated to the surface and the style indeed follows how the fluid changes. This is true even if we mix these styles together. Two, it not only provides us these high-quality results, but it is fast. And by this, I mean blazing fast. You see, we talked about TNST, the transport-based technique, approximately 7 months ago, and in this series, I always note that 2 more papers down the line, and it will be much, much faster. So here's the 2 minute papers moment of truth. What do the timings say? For the previous technique, it said more than 1D. What could that 1D mean? Oh goodness, that thing took an entire day to compute. 
So, what about the new one? What? Really? Just 1 hour? That is insanity. So, how detailed of a simulation are we talking about? Let's have a look together. M-slash-f means minutes per frame, and as you see, if we have tens of thousands of particles, we have 0.05, or in other words, 3 seconds per frame, and we can go up to hundreds of thousands, or even millions of particles, and end up around 30 seconds per frame. Loving it. Artists are going to do miracles with this technique, I am sure. The next step is likely going to be a real-time algorithm, which may appear as soon as 1 or 2 more works down the line, and you can bet your papers that I'll be here to cover it. So, make sure to subscribe and hit the bell icon to not miss it when it appears. The speed of progress in computer graphics research is nothing short of amazing. Also, make sure to have a look at the full paper in the video description, not only because it is a beautiful paper and also a lot of fun to read, but because you will also know what this regularization step here does exactly to the simulation. This episode has been supported by Weights & Biases. In this post, they show you how to connect their system to the Hugging Face library and how to generate tweets in the style of your favorite people. You can even try an example in an interactive notebook through the link in the video description. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you are an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
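To make the style transfer idea above a bit more concrete, here is a minimal Python sketch of the Gram-matrix style loss that classic 2D neural style transfer is built on; TNST and the Lagrangian follow-up generalize this idea from images to density fields and particles. The feature maps below are random placeholders standing in for activations of a pretrained network, so this is an illustration of the loss, not the method from either paper.

import numpy as np

def gram_matrix(features):
    # features: (channels, height, width) activation map from some network layer.
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    # Channel-to-channel correlations, normalized by the number of spatial positions.
    return f @ f.T / (h * w)

def style_loss(generated_features, style_features):
    # Mean squared difference between the two Gram matrices.
    return np.mean((gram_matrix(generated_features) - gram_matrix(style_features)) ** 2)

rng = np.random.default_rng(0)
generated = rng.standard_normal((64, 32, 32))  # placeholder features of the image being optimized
style = rng.standard_normal((64, 32, 32))      # placeholder features of the style image
print(style_loss(generated, style))

As a side note on the timing table mentioned above, 0.05 minutes per frame is 0.05 × 60 = 3 seconds per frame, which is where that number comes from.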
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karojeone Ifehir."}, {"start": 4.4, "end": 10.0, "text": " Style Transfer is a technique in machine learning research where we have two input images,"}, {"start": 10.0, "end": 17.6, "text": " one for content and one for style, and the output is our content image reimagined with this new style."}, {"start": 17.6, "end": 22.400000000000002, "text": " The cool part is that the content can be a photo straight from our camera,"}, {"start": 22.400000000000002, "end": 27.6, "text": " and the style can be a painting which leads to the super fun results that you see here."}, {"start": 27.6, "end": 36.400000000000006, "text": " An earlier paper had shown that the more sophisticated ones can sometimes make even art curators think that they are real."}, {"start": 36.400000000000006, "end": 42.400000000000006, "text": " This previous work blew me away as it could perform style transfer for smoke simulations."}, {"start": 42.400000000000006, "end": 46.6, "text": " I almost fell out of the chair when I have first seen these results."}, {"start": 46.6, "end": 51.2, "text": " It could do fire textures, starry night, you name it."}, {"start": 51.2, "end": 55.2, "text": " It seems that it is able to do anything we can think of."}, {"start": 55.2, "end": 58.400000000000006, "text": " Now let me try to explain two things."}, {"start": 58.400000000000006, "end": 60.800000000000004, "text": " One, why is this so difficult?"}, {"start": 60.800000000000004, "end": 66.4, "text": " And two, the results are really good, so are there any shortcomings?"}, {"start": 66.4, "end": 71.2, "text": " Doing this for smoke simulations is a big departure from 2D style transfer"}, {"start": 71.2, "end": 76.80000000000001, "text": " because that takes an image where this works in 3D and does not deal with images,"}, {"start": 76.80000000000001, "end": 78.80000000000001, "text": " but with density fields."}, {"start": 78.80000000000001, "end": 84.4, "text": " A density field means a collection of numbers that describe how dense a smoke plume is"}, {"start": 84.4, "end": 86.4, "text": " at a given spatial position."}, {"start": 86.4, "end": 89.60000000000001, "text": " It is a physical description of a smoke plume if you will."}, {"start": 89.60000000000001, "end": 96.0, "text": " So how could we possibly apply artistic style from an image to a collection of densities?"}, {"start": 96.0, "end": 102.4, "text": " The solution in this earlier paper was to first down sample the density field to a core-ser version,"}, {"start": 102.4, "end": 109.60000000000001, "text": " perform the style transfer there, and up sample this density field again with already existing techniques."}, {"start": 109.6, "end": 115.19999999999999, "text": " This technique was called transport-based neural style transfer TNST in short,"}, {"start": 115.19999999999999, "end": 116.8, "text": " please remember this."}, {"start": 116.8, "end": 120.0, "text": " Now let's look at some results from this technique."}, {"start": 120.0, "end": 122.8, "text": " This is what our simulation would look like normally,"}, {"start": 122.8, "end": 130.0, "text": " and then all we have to do is show this image to the simulator and what does it do with it?"}, {"start": 130.0, "end": 135.2, "text": " Wow, my goodness, just look at those heavenly patterns."}, {"start": 135.2, "end": 140.79999999999998, "text": " So what does today's new follow-up work offer to us that the previous one 
doesn't?"}, {"start": 140.79999999999998, "end": 145.2, "text": " How can this seemingly nearly perfect technique be improved?"}, {"start": 145.2, "end": 150.0, "text": " Well, this new work takes an even more brazen vantage point to this question."}, {"start": 150.0, "end": 156.39999999999998, "text": " If style transfer on density fields is hard, then try a different representation."}, {"start": 156.39999999999998, "end": 160.79999999999998, "text": " The title of the paper says Lagrangian-style neural style transfer."}, {"start": 160.79999999999998, "end": 162.39999999999998, "text": " So what does that mean?"}, {"start": 162.39999999999998, "end": 164.39999999999998, "text": " It means particles."}, {"start": 164.4, "end": 169.20000000000002, "text": " This was made for particle-based simulations which comes with several advantages."}, {"start": 169.20000000000002, "end": 173.20000000000002, "text": " One, because the styles are now attached to particles,"}, {"start": 173.20000000000002, "end": 176.8, "text": " we can choose different styles for different smoke plumes,"}, {"start": 176.8, "end": 182.8, "text": " and they will remember what style they are supposed to follow."}, {"start": 182.8, "end": 187.20000000000002, "text": " Because of this advantageous property, we can even ask the particles"}, {"start": 187.20000000000002, "end": 193.6, "text": " to change their styles over time, creating these heavenly animations."}, {"start": 193.6, "end": 199.6, "text": " In these 2D examples, you also see how the texture of the simulation evolves over time,"}, {"start": 199.6, "end": 204.6, "text": " and that the elements of the style are really propagated to the surface"}, {"start": 204.6, "end": 208.4, "text": " and the style indeed follows how the fluid changes."}, {"start": 208.4, "end": 212.0, "text": " This is true even if we mix these styles together."}, {"start": 214.0, "end": 219.6, "text": " Two, it not only provides us these high-quality results, but it is fast."}, {"start": 219.6, "end": 222.4, "text": " And by this, I mean blazing fast."}, {"start": 222.4, "end": 228.4, "text": " You see, we talked about TNST, the transport-based technique, approximately 7 months ago,"}, {"start": 228.4, "end": 232.4, "text": " and in this series, I always note that 2 more papers down the line,"}, {"start": 232.4, "end": 235.20000000000002, "text": " and it will be much, much faster."}, {"start": 235.20000000000002, "end": 238.4, "text": " So here's the 2 minute papers moment of truth."}, {"start": 238.4, "end": 240.4, "text": " What do the timing say?"}, {"start": 240.4, "end": 244.4, "text": " For the previous technique, it said more than 1D."}, {"start": 244.4, "end": 246.4, "text": " What could that 1D mean?"}, {"start": 246.4, "end": 250.8, "text": " Oh goodness, that thing took an entire day to compute."}, {"start": 250.8, "end": 252.8, "text": " So, what about the new one?"}, {"start": 252.8, "end": 254.8, "text": " What? 
Really?"}, {"start": 254.8, "end": 256.8, "text": " Just 1 hour?"}, {"start": 256.8, "end": 258.8, "text": " That is insanity."}, {"start": 258.8, "end": 262.8, "text": " So, how detailed of a simulation are we talking about?"}, {"start": 262.8, "end": 264.8, "text": " Let's have a look together."}, {"start": 264.8, "end": 270.8, "text": " M-slash-f means minutes per frame, and as you see, if we have tens of thousands of particles,"}, {"start": 270.8, "end": 274.8, "text": " we have 0.05, or in other words, 3 seconds per frame,"}, {"start": 274.8, "end": 279.8, "text": " and we can go up to hundreds of thousands, or even millions of particles,"}, {"start": 279.8, "end": 283.8, "text": " and end up around 30 seconds per frame."}, {"start": 283.8, "end": 285.8, "text": " Loving it."}, {"start": 285.8, "end": 287.8, "text": " Artists are going to do miracles with this technique, I am sure."}, {"start": 287.8, "end": 291.8, "text": " The next step is likely going to be a real-time algorithm,"}, {"start": 291.8, "end": 295.8, "text": " which may appear as soon as 1 or 2 more works down the line,"}, {"start": 295.8, "end": 298.8, "text": " and you can bet your papers that I'll be here to cover it."}, {"start": 298.8, "end": 303.8, "text": " So, make sure to subscribe and hit the bell icon to not miss it when it appears."}, {"start": 303.8, "end": 307.8, "text": " The speed of progress in computer graphics research is nothing short of amazing."}, {"start": 307.8, "end": 311.8, "text": " Also, make sure to have a look at the full paper in the video description,"}, {"start": 311.8, "end": 316.8, "text": " not only because it is a beautiful paper, and also a lot of fun to read,"}, {"start": 316.8, "end": 322.8, "text": " but because you will also know what this regularization step here does exactly to the simulation."}, {"start": 322.8, "end": 326.8, "text": " This episode has been supported by weights and biases."}, {"start": 326.8, "end": 331.8, "text": " In this post, they show you how to connect their system to the hugging face library"}, {"start": 331.8, "end": 335.8, "text": " and how to generate tweets in the style of your favorite people."}, {"start": 335.8, "end": 340.8, "text": " You can even try an example in an interactive notebook through the link in the video description."}, {"start": 340.8, "end": 345.8, "text": " Wates and biases provide tools to track your experiments in your deep learning projects."}, {"start": 345.8, "end": 348.8, "text": " Their system is designed to save you a ton of time and money,"}, {"start": 348.8, "end": 353.8, "text": " and it is actively used in projects at prestigious labs such as OpenAI,"}, {"start": 353.8, "end": 356.8, "text": " Toyota Research, GitHub, and more."}, {"start": 356.8, "end": 361.8, "text": " And the best part is that if you are an academic or have an open source project,"}, {"start": 361.8, "end": 363.8, "text": " you can use their tools for free."}, {"start": 363.8, "end": 366.8, "text": " It really is as good as it gets."}, {"start": 366.8, "end": 369.8, "text": " Make sure to visit them through wnb.com slash papers,"}, {"start": 369.8, "end": 372.8, "text": " or just click the link in the video description,"}, {"start": 372.8, "end": 374.8, "text": " and you can get a free demo today."}, {"start": 374.8, "end": 377.8, "text": " Our thanks to weights and biases for their longstanding support"}, {"start": 377.8, "end": 380.8, "text": " and for helping us make better videos for you."}, {"start": 380.8, "end": 382.8, "text": " Thanks for 
watching and for your generous support,"}, {"start": 382.8, "end": 393.8, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=8GVHuGCH2eM
DeepMind’s New AI Helps Detecting Breast Cancer
❤️ Check out Linode here and get $20 free credit on your account: https://www.linode.com/papers 📝 The paper "International evaluation of an AI system for breast cancer screening" is available here: https://deepmind.com/research/publications/International-evaluation-of-an-artificial-intelligence-system-to-identify-breast-cancer-in-screening-mammography ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Michael Albrecht, Nader S., Owen Campbell-Moore, Owen Skarpness, Rob Rowe, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh More info if you would like to appear here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. These days, we see so many amazing uses for learning-based algorithms, from enhancing computer animations, teaching virtual animals to walk, to teaching self-driving cars depth perception, and more. It truly feels like no field of science is left untouched by these new techniques, including the medical sciences. You see, in medical imaging, a common problem is that we have so many diagnostic images out there in the world that it makes it more and more infeasible for doctors to look at all of them. What you see here is a work from scientists at DeepMind Health that we covered a few hundred episodes ago. The training part takes about 14,000 optical coherence tomography scans. This is the OCT label that you see on the left. These images are cross sections of the human retina. We first start out with this OCT scan, then a manual segmentation step follows where a doctor marks up this image to show where the relevant parts, like the retinal fluids or the elevations of the retinal pigments, are. After the learning process, this method can reproduce these segmentations really well by itself without the doctor's supervision, and you see here that the two images are almost identical in these tests. Now that we have the segmentation map, it is time to perform classification. This means that we look at this map and assign a probability to each possible condition that may be present. Finally, based on these, a final verdict is made whether the patient needs to be urgently seen, or just a routine check, or perhaps no check is required. This was an absolutely incredible piece of work. However, it is of utmost importance to evaluate these tools together with experienced doctors and hopefully on international datasets. Since then, in this new work, DeepMind has knocked the evaluation out of the park for a system they developed to detect breast cancer as early as possible. Let's briefly talk about the technique, and then I'll try to explain why it is sinfully difficult to evaluate it properly. So, onto the new problem. These mammograms contain four images that show the breasts from two different angles, and the goal is to predict whether the biopsy taken later will be positive for cancer or not. This is especially important because early detection is key for treating these patients, and the key question is how does it compare to the experts? Have a look here. This is a case of cancer that was missed by all six experts in the study, but was correctly identified by the AI. And what about this one? This case didn't work so well. It was caught by all six experts, but was missed by the AI. So, one reassuring sample, and one failed sample. And with this, we have arrived at the central thesis of the paper, which asks the question, what does it really take to say that an AI system surpassed human experts? To even have a fighting chance in tackling this, we have to measure false positives and false negatives. A false positive means that the AI mistakenly predicts that the sample is positive, when in reality it is negative. A false negative means that the AI thinks that the sample is negative, whereas it is positive in reality. The key is that in every decision domain, the permissible rates for false negatives and positives are different. Let me try to explain this through an example. In cancer detection, if we have a sick patient who gets classified as healthy, that is a grave mistake that can lead to serious consequences. 
But if we have a healthy patient who is misclassified as sick, the positive cases get a second look from a doctor who can easily identify the mistake. The consequences, in this case, are much less problematic and can be remedied by spending a little time checking the samples that the AI was less confident about. The bottom line is that there are many different ways to interpret the data, and it is by no means trivial to find out which one is the right way to do so. And now, hold on to your papers because here comes the best part. If we compare the predictions of the AI to the human experts, we see that the false positive cases in the US have been reduced by 5.7%, while the false negative cases have been reduced by 9.7%. That is the holy grail. We don't need to consider the cost of false positives or negatives here because it reduced false positives and false negatives at the same time. Spectacular. Another important detail is that these numbers came out of an independent evaluation. It means that the results did not come from the scientists who wrote the algorithm, and they have been thoroughly checked by independent experts who have no vested interest in this project. This is the reason why you see so many authors on this paper. Excellent. Another interesting tidbit is that the AI was trained on subjects from the UK, and the question was how well does this knowledge generalize to subjects from other places, for instance, the United States. Is this UK knowledge reusable in the US? I have been quite surprised by the answer because it never saw a sample from anyone in the US and still did better than the experts on US data. This is a very reassuring property, and I hope to see some more studies that show how general the knowledge is that these systems are able to obtain through training. And perhaps most important: if you remember one thing from this video, let it be the following. This work, much like other AI-infused medical solutions, is not made to replace human doctors. The goal is instead to empower them and take off as much weight from their shoulders as possible. We have hard numbers for this, as the results concluded that this work reduces the workload of the doctors by 88%, which is an incredible result. Among other far-reaching consequences, I would like to mention that this would substantially help not only the work of doctors in wealthier, more developed countries, but it may single-handedly enable proper cancer detection in developing countries that cannot afford to check these scans. And note that in this video we truly have just scratched the surface; whatever we talk about here in a few minutes cannot be a description as rigorous and accurate as the paper itself, so make sure to check it out in the video description. And with that, I hope you now have a good feel of the pace of progress in machine learning research. The Retina Fluid Project was state of the art in 2018, and now, less than two years later, we have a proper, independently evaluated AI-based detection for breast cancer. Bravo, DeepMind! What a time to be alive! This episode has been supported by Linode. Linode is the world's largest independent cloud computing provider. Unlike entry-level hosting services, Linode gives you full back-end access to your server, which is your step-up to powerful, fast, fully configurable cloud computing. Linode also has one-click apps that streamline your ability to deploy websites, personal VPNs, game servers, and more. 
If you need something as small as a personal online portfolio, Linode has your back and if you need to manage tons of clients' websites and reliably serve them to millions of visitors, Linode can do that too. What's more, they offer affordable GPU instances featuring the Quadro RTX 6000, which is tailor-made for AI, scientific computing, and computer graphics projects. If only I had access to a tool like this while I was working on my last few papers. To receive $20 in credit on your new Linode account, visit linode.com slash papers or click the link in the video description and give it a try today. Our thanks to Linode for supporting the series and helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
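Since the discussion above hinges on the definitions of false positives and false negatives, here is a small, self-contained Python sketch of how such rates can be tallied from paired predictions and ground-truth labels. The toy lists below are made up for illustration and are not data from the study.

def confusion_counts(predictions, ground_truth):
    # Tally true/false positives and negatives from paired boolean labels.
    tp = sum(p and t for p, t in zip(predictions, ground_truth))
    fp = sum(p and not t for p, t in zip(predictions, ground_truth))
    fn = sum((not p) and t for p, t in zip(predictions, ground_truth))
    tn = sum((not p) and (not t) for p, t in zip(predictions, ground_truth))
    return tp, fp, fn, tn

# Made-up toy labels: True means "positive for cancer".
ai_says = [True, False, True, True, False, False]
biopsy = [True, False, False, True, True, False]
tp, fp, fn, tn = confusion_counts(ai_says, biopsy)
print("false positive rate:", fp / (fp + tn))  # healthy patients flagged as sick
print("false negative rate:", fn / (fn + tp))  # sick patients missed by the AI

Lowering both rates at the same time, as reported above, means the comparison does not depend on how one weighs the two kinds of mistakes against each other.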
[{"start": 0.0, "end": 4.48, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karri Zsolnai-Fehir."}, {"start": 4.48, "end": 10.32, "text": " These days, we see so many amazing uses for learning based algorithms from enhancing computer"}, {"start": 10.32, "end": 17.44, "text": " animations, teaching virtual animals to walk, to teaching self-driving cars, depth perception,"}, {"start": 17.44, "end": 23.28, "text": " and more. It truly feels like no field of science is left untouched by these new techniques,"}, {"start": 23.28, "end": 29.6, "text": " including the medical sciences. You see, in medical imaging, a common problem is that we have"}, {"start": 29.6, "end": 35.04, "text": " so many diagnostic images out there in the world that it makes it more and more infeasible for"}, {"start": 35.04, "end": 40.96, "text": " doctors to look at all of them. What you see here is a work from scientists at Deep Mind Health"}, {"start": 40.96, "end": 46.8, "text": " that we covered a few hundred episodes ago. The training part takes about 14,000 optical"}, {"start": 46.8, "end": 53.28, "text": " coherence tomography scans. This is the OCT label that you see on the left. These images are cross"}, {"start": 53.28, "end": 60.480000000000004, "text": " sections of the human retina. We first start out with this OCT scan, then a manual segmentation"}, {"start": 60.480000000000004, "end": 66.64, "text": " step follows where a doctor marks up this image to show where the relevant parts, like the retinal"}, {"start": 66.64, "end": 72.88, "text": " fluids, or the elevations of the retinal pigments are. After the learning process, this method"}, {"start": 72.88, "end": 79.04, "text": " can reproduce these segmentations really well by itself without the doctor's supervision,"}, {"start": 79.04, "end": 85.28, "text": " and you see here that the two images are almost identical in these tests. Now that we have the"}, {"start": 85.28, "end": 92.24000000000001, "text": " segmentation map, it is time to perform classification. This means that we look at this map and assign"}, {"start": 92.24000000000001, "end": 98.88000000000001, "text": " a probability to each possible condition that may be present. Finally, based on these, a final"}, {"start": 98.88000000000001, "end": 104.72, "text": " verdict is made whether the patient needs to be urgently seen, or just a routine check,"}, {"start": 104.72, "end": 111.52, "text": " or perhaps no check is required. This was an absolutely incredible piece of work. However,"}, {"start": 111.52, "end": 117.68, "text": " it is of utmost importance to evaluate these tools together with experienced doctors and hopefully"}, {"start": 117.68, "end": 124.0, "text": " on international datasets. Since then, in this new work, Deep Mind has knocked the evaluation"}, {"start": 124.0, "end": 130.0, "text": " out of the park for a system they developed to detect breast cancer as early as possible."}, {"start": 130.0, "end": 134.72, "text": " Let's briefly talk about the technique, and then I'll try to explain why it is"}, {"start": 134.72, "end": 141.52, "text": " sinfully difficult to evaluate it properly. So, onto the new problem. These mammograms contain"}, {"start": 141.52, "end": 146.96, "text": " four images that show the breasts from two different angles, and the goal is to predict whether"}, {"start": 146.96, "end": 153.76, "text": " the biopsy taken later will be positive for cancer or not. 
This is especially important because"}, {"start": 153.76, "end": 160.07999999999998, "text": " early detection is key for treating these patients, and the key question is how does it compare to"}, {"start": 160.07999999999998, "end": 166.16, "text": " the experts? Have a look here. This is a case of cancer that was missed by all six experts in the"}, {"start": 166.16, "end": 173.68, "text": " study, but was correctly identified by the AI. And what about this one? This case didn't work so"}, {"start": 173.68, "end": 180.88, "text": " well. It was caught by all six experts, but was missed by the AI. So, one reassuring sample,"}, {"start": 180.88, "end": 186.56, "text": " and one failed sample. And with this, we have arrived to the central thesis of the paper,"}, {"start": 186.56, "end": 194.07999999999998, "text": " which asks the question, what does it really take to say that an AI system surpassed human experts?"}, {"start": 194.07999999999998, "end": 200.24, "text": " To even have a fighting chance in tackling this, we have to measure false positives and false"}, {"start": 200.24, "end": 206.88, "text": " negatives. The false positive means that the AI mistakenly predicts that the sample is positive,"}, {"start": 206.88, "end": 213.12, "text": " when in reality it is negative. The false negative means that the AI thinks that the sample is"}, {"start": 213.12, "end": 219.6, "text": " negative, whereas it is positive in reality. The key is that in every decision domain, the"}, {"start": 219.6, "end": 225.51999999999998, "text": " permissible rates for false negatives and positives is different. Let me try to explain this through"}, {"start": 225.51999999999998, "end": 231.28, "text": " this example. In cancer detection, if we have a sick patient who gets classified as healthy,"}, {"start": 231.28, "end": 237.6, "text": " is a grave mistake that can lead to serious consequences. But if we have a healthy patient who is"}, {"start": 237.6, "end": 243.76, "text": " misclassified as sick, the positive cases get a second look from a doctor who can easily"}, {"start": 243.76, "end": 249.28, "text": " identify the mistake. The consequences, in this case, are much less problematic and can be"}, {"start": 249.28, "end": 254.8, "text": " remedied by spending a little time checking the samples that the AI was less confident about."}, {"start": 255.44, "end": 260.88, "text": " The bottom line is that there are many different ways to interpret the data, and it is by no"}, {"start": 260.88, "end": 267.04, "text": " means trivial to find out which one is the right way to do so. And now, hold on to your papers"}, {"start": 267.04, "end": 273.52, "text": " because here comes the best part. If we compare the predictions of the AI to the human experts,"}, {"start": 273.52, "end": 279.2, "text": " we see that the false positive cases in the US have been reduced by 5.7%."}, {"start": 280.56, "end": 289.2, "text": " While the false negative cases have been reduced by 9.7%. That is the holy grail. We don't need to"}, {"start": 289.2, "end": 295.2, "text": " consider the cost of false positives or negatives here because it reduced false positives and false"}, {"start": 295.2, "end": 302.4, "text": " negatives at the same time. Spectacular. Another important detail is that these numbers came out"}, {"start": 302.4, "end": 307.91999999999996, "text": " of an independent evaluation. 
It means that the results did not come from the scientists who"}, {"start": 307.91999999999996, "end": 313.28, "text": " wrote the algorithm and have been thoroughly checked by independent experts who have no vested"}, {"start": 313.28, "end": 319.35999999999996, "text": " interest in this project. This is the reason why you see so many authors on this paper. Excellent."}, {"start": 319.91999999999996, "end": 325.84, "text": " Another interesting tidbit is that the AI was trained on subjects from the UK and the question"}, {"start": 325.84, "end": 331.91999999999996, "text": " was how well does this knowledge generalize for subjects from other places, for instance,"}, {"start": 331.91999999999996, "end": 338.79999999999995, "text": " the United States. Is this UK knowledge reusable in the US? I have been quite surprised by the"}, {"start": 338.8, "end": 345.6, "text": " answer because it never saw a sample from anyone in the US and still did better than the experts"}, {"start": 345.6, "end": 352.24, "text": " on US data. This is a very reassuring property and I hope to see some more studies that show"}, {"start": 352.24, "end": 358.24, "text": " how general the knowledge is that these systems are able to obtain through training and perhaps"}, {"start": 358.24, "end": 363.44, "text": " the most important. If you remember one thing from this video, let it be the following."}, {"start": 363.44, "end": 370.8, "text": " This work much like other AI infused medical solutions are not made to replace human doctors."}, {"start": 370.8, "end": 377.84, "text": " The goal is instead to empower them and take off as much weight from their shoulders as possible."}, {"start": 377.84, "end": 383.52, "text": " We have hard numbers for this as the results concluded that this work reduces this workload of"}, {"start": 383.52, "end": 391.12, "text": " the doctors by 88% which is an incredible result. Among other far-eaching consequences, I would"}, {"start": 391.12, "end": 395.92, "text": " like to mention that this would substantially help not only the work of doctors in"}, {"start": 395.92, "end": 400.96, "text": " a wealthier, more developed countries but it may single-handedly enable proper cancer"}, {"start": 400.96, "end": 405.52, "text": " detections in more developing countries who cannot afford to check these scans."}, {"start": 405.52, "end": 411.12, "text": " And note that in this video we truly have just scratched the surface, whatever we talk about here"}, {"start": 411.12, "end": 416.72, "text": " in a few minutes cannot be a description as rigorous and accurate as the paper itself,"}, {"start": 416.72, "end": 422.16, "text": " so make sure to check it out in the video description. And with that, I hope you now have a good"}, {"start": 422.16, "end": 427.76000000000005, "text": " feel of the pace of progress in machine learning research. The Retina Fluid Project was state of the"}, {"start": 427.76000000000005, "end": 435.92, "text": " art in 2018 and now less than two years later we have a proper independently evaluated AI-based"}, {"start": 435.92, "end": 442.40000000000003, "text": " detection for breast cancer. Bravo deep-mind! What a time to be alive! This episode has been"}, {"start": 442.4, "end": 447.67999999999995, "text": " supported by Linode. 
Linode is the world's largest independent cloud computing provider."}, {"start": 447.67999999999995, "end": 452.88, "text": " Unlike entry-level hosting services, Linode gives you full back-end access to your server,"}, {"start": 452.88, "end": 458.0, "text": " which is your step-up to powerful, fast, fully configurable cloud computing."}, {"start": 458.0, "end": 462.88, "text": " Linode also has one-click apps that streamline your ability to deploy websites,"}, {"start": 462.88, "end": 468.88, "text": " personal VPNs, game servers, and more. If you need something as small as a personal"}, {"start": 468.88, "end": 474.56, "text": " online portfolio, Linode has your back and if you need to manage tons of clients' websites"}, {"start": 474.56, "end": 480.48, "text": " and reliably serve them to millions of visitors, Linode can do that too. What's more, they offer"}, {"start": 480.48, "end": 487.52, "text": " affordable GPU instances featuring the Quadro RTX 6000, which is tailor-made for AI,"}, {"start": 487.52, "end": 493.92, "text": " scientific computing, and computer graphics projects. If only I had access to a tool like this"}, {"start": 493.92, "end": 500.24, "text": " while I was working on my last few papers. To receive $20 in credit on your new Linode account,"}, {"start": 500.24, "end": 506.72, "text": " visit linode.com slash papers or click the link in the video description and give it a try today."}, {"start": 506.72, "end": 511.68, "text": " Our thanks to Linode for supporting the series and helping us make better videos for you."}, {"start": 511.68, "end": 526.24, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=qk4cz0B5kK0
Can We Make An Image Synthesis AI Controllable?
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers The showcased post is available here: https://app.wandb.ai/lavanyashukla/visualize-sklearn/reports/Visualize-Sklearn-Model-Performance--Vmlldzo0ODIzNg 📝 The paper "Semantically Multi-modal Image Synthesis" is available here: https://seanseattle.github.io/SMIS/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Michael Albrecht, Nader S., Owen Campbell-Moore, Owen Skarpness, Rob Rowe, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh More info if you would like to appear here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. Not so long ago, we talked about a neural image generator that was able to dream up beautiful natural scenes. It had a killer feature where it would take as an input not only the image itself, but the label layout of this image as well. That is a gold mine of information, and including it, indeed, opens up a killer application. Look, we can even change the scene around by modifying the labels on this layout, for instance, by adding some mountains, making a grassy field, and adding a lake. Making a scene from scratch from a simple starting point was also possible with this technique. This is already a powerful learning-based tool for artists to use as is, but can we go further? For instance, would it be possible to choose exactly what to fill these regions with? And this is exactly what today's paper excels at, and it turns out it can do much, much more. Let's dive in. One, we can provide it this layout, which they refer to as a semantic mask, and it can synthesize clothes, pants, and hair in many, many different ways. Heavenly. If you have a closer look, you see that fortunately, it doesn't seem to change any other parts of the image. Nothing too crazy here, but please remember this, and now would be a good time to hold onto your papers, because, two, it can change the sky or the material properties of the floor. And, wait, are you seeing what I am seeing? We cannot just change the sky, because we have a lake there reflecting it, therefore the lake has to change, too. Does it? Yes, it does. It indeed changes other parts of the image when it is necessary, which is a hallmark of a learning algorithm that truly understands what it is synthesizing. You can see this effect especially clearly at the end of the looped footage, when the sky is the brightest. Loving it. So, what about the floor? This is one of my favorites. It doesn't just change the color of the floor itself, but it performs proper material modeling. Look, the reflections also become glossier over time. A proper light transport simulation for this scenario would take a very, very long time; we are likely talking minutes to hours. And this thing has never been taught about light transport; it learned about these materials by itself. Make no mistake, these may be low resolution, pixelated images, but this still feels like science fiction. Two more papers down the line, and we will see HD videos of this, I am sure. The third application is something that the authors refer to as appearance mixture, where we can essentially select parts of the image to our liking and fuse these selected aspects together into a new image. This could more or less be done with traditional handcrafted methods too, but four, it can also do style morphing, where we start from image A, change it until it looks like image B, and back. Now, normally, this can be done very easily with a handcrafted method called image interpolation. However, to make this morphing really work, the tricky part is that all of the intermediate images have to be meaningful. And as you can see, this learning method does a fine job at that. Any of these intermediate images can stand on their own. I try to stop the morphing process at different points so you can have a look and decide for yourself. Let me know in the comments if you agree. 
I am delighted to see that these image synthesis algorithms are improving at a stunning pace, and I think these tools will rapidly become viable to aid the work of artists in the industry. This episode has been supported by Weights & Biases. In this post, they show you how to visualize your scikit-learn models with just a few lines of code. Look at all these beautiful visualizations. So good! You can even try an example in an interactive notebook through the link in the video description. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you are an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their longstanding support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
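To illustrate the difference between the handcrafted image interpolation mentioned above and learned morphing, here is a short Python sketch. The generator below is a stand-in placeholder rather than the paper's actual model or API; the point is only that blending latent codes and then decoding keeps every intermediate frame a plausible image, whereas blending raw pixels does not.

import numpy as np

def pixel_interpolate(img_a, img_b, t):
    # Handcrafted morphing: blend pixels directly; intermediate frames tend to
    # look like ghostly double exposures rather than meaningful images.
    return (1.0 - t) * img_a + t * img_b

def latent_interpolate(generator, z_a, z_b, t):
    # Learned morphing: blend the latent codes first, then decode, so every
    # intermediate frame is itself an image the generator considers plausible.
    return generator((1.0 - t) * z_a + t * z_b)

# Toy usage with a fake "generator" that just reshapes its latent code.
fake_generator = lambda z: z.reshape(8, 8)
rng = np.random.default_rng(0)
z_a, z_b = rng.random(64), rng.random(64)
frames = [latent_interpolate(fake_generator, z_a, z_b, t) for t in np.linspace(0.0, 1.0, 5)]
print(len(frames), frames[0].shape)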
[{"start": 0.0, "end": 4.5200000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karri Zonai-Fehir."}, {"start": 4.5200000000000005, "end": 12.280000000000001, "text": " Not so long ago, we talked about a neural image generator that was able to dream up beautiful natural scenes."}, {"start": 12.280000000000001, "end": 18.04, "text": " It had a killer feature where it would take as an input not only the image itself,"}, {"start": 18.04, "end": 21.28, "text": " but the label layout of this image as well."}, {"start": 21.28, "end": 27.8, "text": " That is, a gold mine of information and including this, indeed, opens up a killer application."}, {"start": 27.8, "end": 33.2, "text": " Look, we can even change the scene around by modifying the labels on this layout,"}, {"start": 33.2, "end": 39.6, "text": " for instance, by adding some mountains, make a grassy field, and add the lake."}, {"start": 39.6, "end": 45.2, "text": " Making a scene from scratch from a simple starting point was also possible with this technique."}, {"start": 45.2, "end": 52.6, "text": " This is already a powerful learning-based tool for artists to use as is, but can we go further?"}, {"start": 52.6, "end": 58.2, "text": " For instance, would it be possible to choose exactly what to fill these regions with?"}, {"start": 58.2, "end": 64.2, "text": " And this is what today's paper exiles it, and it turns out it can do much, much more."}, {"start": 64.2, "end": 65.6, "text": " Let's dive in."}, {"start": 65.6, "end": 70.8, "text": " One, we can provide it this layout, which they refer to as a semantic mask,"}, {"start": 70.8, "end": 77.8, "text": " and it can synthesize clothes, pants, and hair in many, many different ways."}, {"start": 77.8, "end": 85.2, "text": " Heavenly. If you have a closer look, you see that fortunately, it doesn't seem to change any other parts of the image."}, {"start": 85.2, "end": 91.6, "text": " Nothing too crazy here, but please remember this, and now would be a good time to hold onto your papers,"}, {"start": 91.6, "end": 97.6, "text": " because, too, it can change the sky or the material properties of the floor."}, {"start": 97.6, "end": 102.0, "text": " And, wait, are you seeing what I am seeing?"}, {"start": 102.0, "end": 109.2, "text": " We cannot just change the sky, because we have a lake there, reflecting it, therefore the lake has to change, too."}, {"start": 109.2, "end": 111.2, "text": " Does it?"}, {"start": 111.2, "end": 116.4, "text": " Yes, it does. It indeed changes other parts of the image when it is necessary,"}, {"start": 116.4, "end": 122.0, "text": " which is a hallmark of a learning algorithm that truly understands what it is synthesizing."}, {"start": 122.0, "end": 128.6, "text": " You can see this effect, especially clearly, at the end of the looped footage, when the sky is the brightest."}, {"start": 128.6, "end": 134.4, "text": " Loving it. So, what about the floor? 
This is one of my favorites."}, {"start": 134.4, "end": 140.6, "text": " It doesn't just change the color of the floor itself, but it performs proper material modeling."}, {"start": 140.6, "end": 145.0, "text": " Look, the reflections also become glossier over time."}, {"start": 145.0, "end": 150.6, "text": " A proper light transport simulation for this scenario would take a very, very long time,"}, {"start": 150.6, "end": 153.6, "text": " we are likely talking from minutes to hours."}, {"start": 153.6, "end": 160.6, "text": " And this thing has never been taught about light transport and learned about these materials by itself."}, {"start": 160.6, "end": 167.6, "text": " Make no mistake, these may be low resolution, pixelated images, but this still feels like science fiction."}, {"start": 167.6, "end": 172.4, "text": " Two more papers down the line, and we will see HD videos of this, I am sure."}, {"start": 172.4, "end": 178.0, "text": " The third application is something that the authors refer to as appearance mixture,"}, {"start": 178.0, "end": 186.4, "text": " where we can essentially select parts of the image to our liking and fuse these selected aspects together into a new image."}, {"start": 186.4, "end": 191.0, "text": " This could more or less be done with traditional handcrafted methods too,"}, {"start": 191.0, "end": 200.4, "text": " but four, it can also do style morphing, where we start from image A, change it until it looks like image B and back."}, {"start": 200.4, "end": 206.8, "text": " Now, normally, this can be done very easily with a handcrafted method called image interpolation."}, {"start": 206.8, "end": 215.20000000000002, "text": " However, to make this morphing really work, the tricky part is that all of the intermediate images have to be meaningful."}, {"start": 215.20000000000002, "end": 219.20000000000002, "text": " And as you can see, this learning method does a fine job at that."}, {"start": 219.20000000000002, "end": 222.8, "text": " Any of these intermediate images can stand on their own."}, {"start": 222.8, "end": 228.60000000000002, "text": " I try to stop the morphing process at different points so you can have a look and decide for yourself."}, {"start": 228.60000000000002, "end": 231.0, "text": " Let me know in the comments if you agree."}, {"start": 231.0, "end": 237.0, "text": " I am delighted to see that these image synthesis algorithms are improving at a stunning pace,"}, {"start": 237.0, "end": 243.4, "text": " and I think these tools will rapidly become viable to aid the work of artists in the industry."}, {"start": 243.4, "end": 246.8, "text": " This episode has been supported by weights and biases."}, {"start": 246.8, "end": 253.2, "text": " In this post, they show you how to visualize your scikit-learn models with just a few lines of code."}, {"start": 253.2, "end": 256.0, "text": " Look at all these beautiful visualizations."}, {"start": 256.0, "end": 262.4, "text": " So good! 
You can even try an example in an interactive notebook through the link in the video description."}, {"start": 262.4, "end": 267.0, "text": " Weight and biases provide tools to track your experiments in your deep learning projects."}, {"start": 267.0, "end": 270.4, "text": " Their system is designed to save you a ton of time and money,"}, {"start": 270.4, "end": 278.0, "text": " and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more."}, {"start": 278.0, "end": 285.2, "text": " And the best part is that if you are an academic or have an open source project, you can use their tools for free."}, {"start": 285.2, "end": 287.59999999999997, "text": " It really is as good as it gets."}, {"start": 287.59999999999997, "end": 291.4, "text": " Make sure to visit them through wnb.com slash papers,"}, {"start": 291.4, "end": 296.2, "text": " or just click the link in the video description, and you can get a free demo today."}, {"start": 296.2, "end": 299.4, "text": " Our thanks to weights and biases for their longstanding support,"}, {"start": 299.4, "end": 302.2, "text": " and for helping us make better videos for you."}, {"start": 302.2, "end": 315.2, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=dJ4rWhpAGFI
DeepMind Made A Superhuman AI For 57 Atari Games! 🕹
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Agent57: Outperforming the Atari Human Benchmark" is available here: https://deepmind.com/blog/article/Agent57-Outperforming-the-human-Atari-benchmark https://arxiv.org/abs/2003.13350 ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join Apologies and special thanks to Owen Skarpness!  🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Michael Albrecht, Nader S., Owen Campbell-Moore, Owen Skarpness, Rob Rowe, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh More info if you would like to appear here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #Agent57 #DeepMind
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. Between 2013 and 2015, DeepMind worked on an incredible learning algorithm by the name of Deep Reinforcement Learning. This technique looked at the pixels of the game, was given a controller, and played much like a human would, with the exception that it learned to play some Atari games on a superhuman level. I tried to train it a few years ago and would like to invite you for a marvelous journey to see what happened. When it starts learning to play an old game, Atari Breakout, at first the algorithm loses all of its lives without any sign of intelligent action. If we wait a bit, it becomes better at playing the game, roughly matching the skill level of an adept player. But here's the catch: if we wait for longer, we get something absolutely spectacular. Over time, it learns to play like a pro and finds out that the best way to win the game is digging a tunnel through the bricks and hitting them from behind. This technique is a combination of a neural network that processes the visual data that we see on the screen and the reinforcement learner that comes up with the gameplay-related decisions. This is an amazing algorithm, a true breakthrough in AI research. However, it had its own issues. For instance, it did not do well on Montezuma's Revenge or Pitfall because these games require more long-term planning. Believe it or not, the solution in a follow-up work was to infuse these agents with a very human-like property, curiosity. That agent was able to do much, much better at these games, and then got addicted to the TV. But that's a different story. Note that this has been remedied since. And believe it or not, as impossible as it may sound, all of this has been improved significantly. This new work is called Agent 57, and it plays better than humans on all 57 Atari games. Absolute insanity. Let's have a look at it in action, and then in a moment I'll try to explain how it does what it does. You see Agent 57 doing really well at the Solaris game here. This space battle game is one of the most impressive games on the Atari, as it contains 16 quadrants, 48 sectors, space battles, warp mechanics, pirate ships, fuel management, and much more, you name it. This game is not only quite complex, but it also is a credit assignment nightmare for an AI to play. This credit assignment problem means that it can happen that we choose an action and we only win or lose hundreds of actions later, leaving us with no idea as to which of our actions led to this win or loss, thus making it difficult to learn from our actions. This Solaris game is a credit assignment nightmare. Let me try to bring this point to life by talking about school. In school, when we take an exam, we hand it in and the teacher gives us feedback for every single one of our solutions and tells us whether we were correct or not. We know exactly where we did well and what we need to practice to do better next time. Clear, simple, easy. Solaris, on the other hand, not so much. If this were a school project, the Solaris game would be a brutal, merciless teacher. Would you like to know your grades? No grades, but he tells you that you failed. Well, that's weird. Okay, where did we fail? He won't say. What should we do better next time to improve? You'll figure it out, buddy. Also, we wrote this exam 10 weeks ago. Why do we only get to know about the results now? No answer. 
I think in this case we can conclude that this would be a challenging learning environment even for a motivated human, so just imagine how hard it is for an AI. Hopefully this puts into perspective how incredible it is that Agent 57 performs well on this game. It truly looks like science fiction. To understand what Agent 57 adds to this, it was given something called a meta-controller that can decide when to prioritize short and long-term planning. In the short term, we typically have mechanical challenges like avoiding a skull in Montezuma's Revenge or dodging the shots of an enemy ship in Solaris. The long-term part is also necessary to explore new parts of the game and have a good strategic plan to eventually win the game. This is great because this new technique can now deal with the brutal and merciless teacher whom we just introduced. Alternatively, this agent can be thought of as someone who has a motivation to explore the game and do well at mechanical tasks at the same time, and can also prioritize these tasks. With this, for the first time, scientists at DeepMind found a learning algorithm that exceeds human performance on all 57 Atari games. And please do not forget about the fact that DeepMind tries to solve general intelligence and then use general intelligence to solve everything else. This is their holy grail. In other words, they are seeking an algorithm that can learn by itself and achieve human-like performance on a variety of tasks. There is still plenty to do, but we are now one step closer to that. If you learn only one thing from this video, let it be the fact that there are not 57 different methods, but one general algorithm that plays 57 games better than humans. What a time to be alive. I would like to show you a short message from a few days ago that melted my heart. I got this from Nathan, who has been inspired by these incredible works and decided to turn his life around and go back to study more. I love my job, and reading messages like this is one of the absolute best parts of it. Congratulations, Nathan, and note that you can take this inspiration, and greatness can materialize in every aspect of life, not only in computer graphics or machine learning research. Good luck. If you are a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos, and I'm happy to tell you that they are offering GPU cloud services as well. The Lambda GPU cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold onto your papers, because the Lambda GPU cloud costs less than half of AWS and Azure. Make sure to go to lambdalabs.com slash papers and sign up for one of their amazing GPU instances today. Our thanks to Lambda for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
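The credit assignment problem described above is usually handled in reinforcement learning with discounted returns, which spread a delayed reward back to earlier actions. The sketch below is textbook reinforcement learning written in Python for illustration; it is not the actual Agent 57 implementation, and the episode length and reward are made up.

def discounted_returns(rewards, gamma=0.99):
    # Work backwards through the episode: each step's return is its own reward
    # plus the discounted return of everything that follows it.
    returns = []
    running = 0.0
    for r in reversed(rewards):
        running = r + gamma * running
        returns.append(running)
    return list(reversed(returns))

# A long episode where the only feedback is a single win bonus at the very end,
# similar in spirit to the delayed feedback described for Solaris above.
rewards = [0.0] * 200 + [1.0]
returns = discounted_returns(rewards)
print(returns[-1])  # 1.0: the final action gets full credit
print(returns[0])   # about 0.13: early actions still receive heavily discounted credit

In this framing, the meta-controller mentioned above can be thought of as deciding how much weight to put on such long-horizon returns versus more immediate, mechanical rewards.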
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karajol Naifahir."}, {"start": 4.8, "end": 11.82, "text": " Between 2013 and 2015, Deep Mind worked on an incredible learning algorithm by the name"}, {"start": 11.82, "end": 13.8, "text": " Deep Reinforcement Learning."}, {"start": 13.8, "end": 19.28, "text": " This technique, looked at the pixels of the game, was given a controller and played much"}, {"start": 19.28, "end": 24.94, "text": " like a human would, with the exception that it learned to play some Atari games on a"}, {"start": 24.94, "end": 26.54, "text": " superhuman level."}, {"start": 26.54, "end": 31.32, "text": " I have tried to train it a few years ago and would like to invite you for a marvelous"}, {"start": 31.32, "end": 33.7, "text": " journey to see what happened."}, {"start": 33.7, "end": 39.7, "text": " When it starts learning to play an old game, Atari Breakout, at first the algorithm loses"}, {"start": 39.7, "end": 43.78, "text": " all of its lives without any science of intelligent action."}, {"start": 43.78, "end": 49.26, "text": " If we wait a bit, it becomes better at playing the game, roughly matching the skill level"}, {"start": 49.26, "end": 51.14, "text": " of an adapt player."}, {"start": 51.14, "end": 57.46, "text": " But here's the catch, if we wait for longer, we get something absolutely spectacular."}, {"start": 57.46, "end": 62.86, "text": " Over time, it learns to play like a pro and finds out that the best way to win the game"}, {"start": 62.86, "end": 67.34, "text": " is digging a tunnel through the bricks and hit them from behind."}, {"start": 67.34, "end": 71.94, "text": " This technique is a combination of a neural network that processes the visual data that"}, {"start": 71.94, "end": 77.54, "text": " we see on the screen and the reinforcement learner that comes up with the gameplay-related"}, {"start": 77.54, "end": 79.02, "text": " decisions."}, {"start": 79.02, "end": 83.25999999999999, "text": " This is an amazing algorithm, a true breakthrough in AI research."}, {"start": 83.25999999999999, "end": 86.25999999999999, "text": " However, it had its own issues."}, {"start": 86.25999999999999, "end": 92.02, "text": " For instance, it did not do well on Montezuma's revenge or pitfall because these games require"}, {"start": 92.02, "end": 94.38, "text": " more long-term planning."}, {"start": 94.38, "end": 99.94, "text": " Believe it or not, the solution in a follow-up work was to infuse these agents with a very"}, {"start": 99.94, "end": 103.14, "text": " human-like property, curiosity."}, {"start": 103.14, "end": 108.86, "text": " That agent was able to do much, much better at these games and then got addicted to the"}, {"start": 108.86, "end": 109.86, "text": " TV."}, {"start": 109.86, "end": 112.1, "text": " But that's a different story."}, {"start": 112.1, "end": 114.74, "text": " Note that this has been remedied since."}, {"start": 114.74, "end": 121.3, "text": " And believe it or not, as impossible as it may sound, all of this has been improved significantly."}, {"start": 121.3, "end": 128.82, "text": " This new work is called Agent 57 and it plays better than humans on all 57 Atari games."}, {"start": 128.82, "end": 130.82, "text": " Absolute insanity."}, {"start": 130.82, "end": 136.18, "text": " Let's have a look at it in action and then in a moment I'll try to explain how it does"}, {"start": 136.18, "end": 137.73999999999998, "text": " what it does."}, {"start": 137.73999999999998, "end": 
142.9, "text": " You see Agent 57 doing really well at the Solaris game here."}, {"start": 142.9, "end": 147.94, "text": " This space battle game is one of the most impressive games on the Atari as it contains"}, {"start": 147.94, "end": 155.1, "text": " 16 quadrants, 48 sectors, space battles, warp mechanics, pirate ships, fuel management"}, {"start": 155.1, "end": 157.38, "text": " and much more, you name it."}, {"start": 157.38, "end": 162.78, "text": " This game is not only quite complex but it also is a credit assignment nightmare for an"}, {"start": 162.78, "end": 164.5, "text": " AI to play."}, {"start": 164.5, "end": 169.26, "text": " This credit assignment problem means that it can happen that we choose an action and we"}, {"start": 169.26, "end": 175.66, "text": " only win or lose hundreds of actions later, leaving us with no idea as to which of our"}, {"start": 175.66, "end": 181.85999999999999, "text": " actions led to this win or loss, thus making it difficult to learn from our actions."}, {"start": 181.85999999999999, "end": 185.5, "text": " This Solaris game is a credit assignment nightmare."}, {"start": 185.5, "end": 189.58, "text": " Let me try to bring this point to life by talking about school."}, {"start": 189.58, "end": 195.06, "text": " In school when we take an exam we hand it in and the teacher gives us feedback for every"}, {"start": 195.06, "end": 199.98, "text": " single one of our solutions and tells us whether we were correct or not."}, {"start": 199.98, "end": 205.86, "text": " We know exactly where we did well and what we need to practice to do better next time."}, {"start": 205.86, "end": 208.18, "text": " Clear, simple, easy."}, {"start": 208.18, "end": 211.42000000000002, "text": " Solaris on the other hand, not so much."}, {"start": 211.42, "end": 217.26, "text": " If this were a school project, the Solaris game would be a brutal, merciless teacher."}, {"start": 217.26, "end": 219.14, "text": " Would you like to know your grades?"}, {"start": 219.14, "end": 221.94, "text": " No grades but he tells you that you failed."}, {"start": 221.94, "end": 224.1, "text": " Well, that's weird."}, {"start": 224.1, "end": 226.26, "text": " Okay, where did we fail?"}, {"start": 226.26, "end": 227.7, "text": " He won't say."}, {"start": 227.7, "end": 230.54, "text": " What should we do better next time to improve?"}, {"start": 230.54, "end": 232.14, "text": " You'll figure it out, Bako."}, {"start": 232.14, "end": 235.33999999999997, "text": " Also we wrote this exam 10 weeks ago."}, {"start": 235.33999999999997, "end": 238.57999999999998, "text": " Why do we only get to know about the results now?"}, {"start": 238.57999999999998, "end": 239.66, "text": " No answer."}, {"start": 239.66, "end": 244.74, "text": " I think in this case we can conclude that this would be a challenging learning environment"}, {"start": 244.74, "end": 250.38, "text": " even for a motivated human, so just imagine how hard it is for an AI."}, {"start": 250.38, "end": 256.14, "text": " Hopefully this puts into perspective how incredible it is that Agent 57 performs well on"}, {"start": 256.14, "end": 257.14, "text": " this game."}, {"start": 257.14, "end": 259.9, "text": " It truly looks like science fiction."}, {"start": 259.9, "end": 266.62, "text": " To understand what Agent 57 adds to this, it was given something called a meta-controller"}, {"start": 266.62, "end": 271.9, "text": " that can decide when to prioritize short and long-term planning."}, {"start": 271.9, "end": 277.46, "text": " On 
the short term, we typically have mechanical challenges like avoiding a skull in Montezuma's"}, {"start": 277.46, "end": 282.22, "text": " revenge or dodging the shots of an enemy ship in Solaris."}, {"start": 282.22, "end": 288.34000000000003, "text": " The long-term part is also necessary to explore new parts of the game and have a good strategic"}, {"start": 288.34000000000003, "end": 291.42, "text": " plan to eventually win the game."}, {"start": 291.42, "end": 296.14, "text": " This is great because this new technique can now deal with the brutal and merciless"}, {"start": 296.14, "end": 298.62, "text": " teacher who we just introduced."}, {"start": 298.62, "end": 303.53999999999996, "text": " Alternatively, this agent can be thought of as someone who has a motivation to explore"}, {"start": 303.53999999999996, "end": 310.9, "text": " the game and do well at mechanical tasks at the same time and can also prioritize these"}, {"start": 310.9, "end": 311.9, "text": " tasks."}, {"start": 311.9, "end": 318.41999999999996, "text": " With this, for the first time, scientists at DeepMind found a learning algorithm that exceeds"}, {"start": 318.41999999999996, "end": 322.62, "text": " human performance on all 57 Atari games."}, {"start": 322.62, "end": 328.06, "text": " And please do not forget about the fact that DeepMind tries to solve general intelligence"}, {"start": 328.06, "end": 332.74, "text": " and then use general intelligence to solve everything else."}, {"start": 332.74, "end": 334.62, "text": " This is their holy grail."}, {"start": 334.62, "end": 340.34000000000003, "text": " In other words, they are seeking an algorithm that can learn by itself and achieve human-like"}, {"start": 340.34000000000003, "end": 343.14, "text": " performance on a variety of tasks."}, {"start": 343.14, "end": 348.22, "text": " There is still plenty to do, but we are now one step closer to that."}, {"start": 348.22, "end": 353.66, "text": " If you learn only one thing from this video, let it be the fact that there are not 57 different"}, {"start": 353.66, "end": 360.46000000000004, "text": " methods but one general algorithm that plays 57 games better than humans."}, {"start": 360.46000000000004, "end": 362.1, "text": " What a time to be alive."}, {"start": 362.1, "end": 367.1, "text": " I would like to show you a short message from a few days ago that melted my heart."}, {"start": 367.1, "end": 372.34000000000003, "text": " This I got from Nathan who has been inspired by these incredible works and he decided to"}, {"start": 372.34000000000003, "end": 375.94000000000005, "text": " turn his life around and go back to study more."}, {"start": 375.94, "end": 381.62, "text": " I love my job and reading messages like this is one of the absolute best parts of it."}, {"start": 381.62, "end": 386.82, "text": " Congratulations Nathan and note that you can take this inspiration and greatness can materialize"}, {"start": 386.82, "end": 392.06, "text": " in every aspect of life not only in computer graphics or machine learning research."}, {"start": 392.06, "end": 393.14, "text": " Good luck."}, {"start": 393.14, "end": 398.18, "text": " If you are a researcher or a startup looking for cheap GPU compute to run these algorithms,"}, {"start": 398.18, "end": 400.7, "text": " check out Lambda GPU Cloud."}, {"start": 400.7, "end": 405.46, "text": " I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you"}, {"start": 405.46, "end": 409.06, "text": " that they are offering GPU cloud services 
as well."}, {"start": 409.06, "end": 416.46, "text": " The Lambda GPU cloud can train ImageNet to 93% accuracy for less than $19."}, {"start": 416.46, "end": 421.41999999999996, "text": " Lambda's web-based IDE lets you easily access your instance right in your browser."}, {"start": 421.41999999999996, "end": 427.14, "text": " And finally hold onto your papers because the Lambda GPU cloud costs less than half of"}, {"start": 427.14, "end": 429.65999999999997, "text": " AWS and Azure."}, {"start": 429.65999999999997, "end": 434.9, "text": " Make sure to go to lambdaleps.com slash papers and sign up for one of their amazing"}, {"start": 434.9, "end": 436.78, "text": " GPU instances today."}, {"start": 436.78, "end": 440.41999999999996, "text": " Our thanks to Lambda for helping us make better videos for you."}, {"start": 440.42, "end": 470.38, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=ihYsJpibNRU
Now We Can Relight Paintings…and Turns Out, Photos Too! 🎨
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers Their instrumentation for this paper is available here: https://app.wandb.ai/ayush-thakur/paintlight/reports/Generate-Artificial-Lightning-Effect--VmlldzoxMTA2Mjg 📝 The paper "Generating Digital Painting Lighting Effects via RGB-space Geometry" is available here: https://lllyasviel.github.io/PaintingLight/ The brush synthesizer project is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/procedural-brush-synthesis-paper/ Image credits: BJPentecost - https://www.deviantart.com/bjpentecost Style2Paints Team Pepper and Carrot David Revoy - https://www.davidrevoy.com/tag/cc-by 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Michael Albrecht, Nader S., Owen Campbell-Moore, Owen Skarpness, Rob Rowe, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh More info if you would like to appear here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. When I was a bachelor student and took on my first bigger undertaking in computer graphics in 2011, it was a research project for a feature-length movie where the goal was to learn the brush strokes of an artist. You see the sample brush stroke here, and what the method could do is change the silhouette of a digital 3D object to appear as if it were drawn in this style. This way we could use an immense amount of perfectly modeled geometry and make it all look as if it were drawn by an artist. The project was a combination of machine learning and computer graphics, and it got me hooked on this topic for life. So, that was about silhouettes, but what about being able to change the lighting? To address this problem, this new work promises something that sounds like science fiction. The input is a painting, which is thought of as a collection of brush strokes. First, the algorithm tries to break down the image into these individual strokes. Here, on the left with A, you see the painting itself, and B is the real collection of strokes that were used to create it. This is what the algorithm tries to estimate, and this colorful image visualizes the difference between the two. The blue color denotes regions where these brush strokes are estimated well, and we find more differences as we transition into the red colored regions. So, great, now we have a bunch of these brush strokes, but what do we do with them? Well, let's add one more assumption into this system, which is that the densely packed regions are going to be more affected by the lighting effects, while the sparser regions will be less impacted. This way we can make the painting change as if we were moving an imaginary light source around. No painting skills or manual labor required. Wonderful. But some of the skeptical Fellow Scholars out there would immediately ask the question: it looks great, but how do we know if this really is good enough to be used in practice? The authors thought of that too and asked an artist to create some of these views by hand, and what do you know, they are extremely good. Very close to the real deal, and all this comes for free. Insanity. Now, we noted that the input for this algorithm is just one image. So, what about a cheeky experiment where we add not a painting but a photo and pretend that it is a painting; can it relight it properly? Well, hold on to your papers and let's have a look. Here's the photo, the breakdown of the brush strokes if this were a painting, and wow! Here are the lighting effects. It worked, and if you enjoyed these results and would like to see more, make sure to have a look at this beautiful paper in the video description. For instance, here you see a comparison against previous works, and it seems quite clear that it smokes the competition on a variety of test cases. And the papers it is compared to are also quite recent. The pace of progress in computer graphics research is absolutely incredible. More on that in a moment. Also, just look at the information density here. This tiny diagram shows you exactly where the light source positions are. I remember looking at a paper on a similar topic that did not have this, and it made the entirety of the work a great deal more challenging to evaluate properly. This kind of attention to detail might seem like a small thing, but it makes all the difference for a great paper, which this one is.
The provided user study shows that these outputs can be generated within a matter of seconds and reinforces our hunch that most people prefer the outputs of the new technique to the previous ones. So much improvement in so little time. And with this, we can now create digital lighting effects from a single image for paintings and even photographs in a matter of seconds. What a time to be alive. What you see here is an instrumentation of this exact paper we have talked about, which was made by Weights & Biases. I think organizing these experiments really showcases the usability of their system. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you are an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description, and you can get a free demo today. Thanks for watching and for your generous support, and I'll see you next time.
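The transcript above rests on one key assumption: densely packed stroke regions react more strongly to the imaginary light source than sparse ones. The following NumPy sketch is only a toy illustration of that density-modulated shading idea, not the paper's actual RGB-space geometry technique; in particular, using the local gradient magnitude as a stand-in for stroke density is my own simplification.

import numpy as np

def toy_relight(image, light_dir=(1.0, 0.5), strength=0.4):
    """Crudely brighten/darken a float RGB painting (values in [0, 1])
    along a light direction, scaled by a rough stroke-density proxy."""
    gray = image.mean(axis=2)                    # H x W luminance
    gy, gx = np.gradient(gray)
    density = np.hypot(gx, gy)                   # busy regions -> large values
    density /= density.max() + 1e-8

    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # A linear ramp standing in for the light position: positive near the
    # light, negative on the far side of the canvas.
    ramp = (xs / w - 0.5) * light_dir[0] + (ys / h - 0.5) * light_dir[1]
    ramp /= np.abs(ramp).max() + 1e-8

    shading = 1.0 + strength * (ramp * density)[..., None]
    return np.clip(image * shading, 0.0, 1.0)

# Usage with any float RGB image in [0, 1]:
# relit = toy_relight(painting, light_dir=(-1.0, 0.2))

Moving light_dir around plays the role of dragging the imaginary light source over the canvas; everything else about the real method, including how it recovers the brush strokes in the first place, is far more involved than this.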
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zorna Ifehir."}, {"start": 4.8, "end": 10.18, "text": " When I was a bachelor student and took on my first bigger undertaking in computer graphics"}, {"start": 10.18, "end": 16.240000000000002, "text": " in 2011, it was a research project for a feature-length movie where the goal was to be able to"}, {"start": 16.240000000000002, "end": 21.56, "text": " learn the brush strokes of an artist, you see the sample brush stroke here and what it"}, {"start": 21.56, "end": 27.96, "text": " could do is change the silhouette of a digital 3D object to appear as if it were drawn"}, {"start": 27.96, "end": 29.64, "text": " with this style."}, {"start": 29.64, "end": 34.36, "text": " This way we could use an immense amount of perfectly model geometry and make them"}, {"start": 34.36, "end": 37.480000000000004, "text": " look as if they were drawn by an artist."}, {"start": 37.480000000000004, "end": 42.4, "text": " The project was a combination of machine learning and computer graphics and got me hooked"}, {"start": 42.4, "end": 44.4, "text": " on this topic for life."}, {"start": 44.4, "end": 50.480000000000004, "text": " So, this was about silhouettes, but what about being able to change the lighting?"}, {"start": 50.480000000000004, "end": 56.040000000000006, "text": " To address this problem, this new work promises something that sounds like science fiction."}, {"start": 56.04, "end": 61.2, "text": " The input is a painting which is thought of as a collection of brush strokes."}, {"start": 61.2, "end": 66.4, "text": " First the algorithm is trying to break down the image into these individual strokes."}, {"start": 66.4, "end": 73.28, "text": " Here, on the left with A, you see the painting itself and B is the real collection of strokes"}, {"start": 73.28, "end": 75.24, "text": " that were used to create it."}, {"start": 75.24, "end": 81.4, "text": " This is what the algorithm is trying to estimate it with and this colorful image visualizes"}, {"start": 81.4, "end": 83.48, "text": " the difference between the two."}, {"start": 83.48, "end": 88.44, "text": " The blue color denotes regions where these brush strokes are estimated well and we can"}, {"start": 88.44, "end": 93.16, "text": " find more differences as we transition into the red colored regions."}, {"start": 93.16, "end": 99.48, "text": " So, great, now that we have a bunch of these brush strokes, but what do we do with them?"}, {"start": 99.48, "end": 104.92, "text": " Well, let's add one more assumption into this system which is that the densely packed"}, {"start": 104.92, "end": 111.0, "text": " regions are going to be more affected by the lighting effects while the sparser regions"}, {"start": 111.0, "end": 112.88000000000001, "text": " will be less impacted."}, {"start": 112.88, "end": 118.08, "text": " This way we can make the painting change as if we were to move our imaginary light source"}, {"start": 118.08, "end": 119.32, "text": " around."}, {"start": 119.32, "end": 122.67999999999999, "text": " No painting skills or manual labor required."}, {"start": 122.67999999999999, "end": 124.36, "text": " Wonderful."}, {"start": 124.36, "end": 129.44, "text": " But some of the skeptical fellow scholars out there would immediately ask the question,"}, {"start": 129.44, "end": 136.2, "text": " it looks great, but how do we know if this really is good enough to be used in practice?"}, {"start": 136.2, "end": 141.32, "text": " The authors thought 
of that too and asked an artist to create some of these views by"}, {"start": 141.32, "end": 145.51999999999998, "text": " hand and what do you know, they are extremely good."}, {"start": 145.51999999999998, "end": 150.0, "text": " Very close to the real deal and all this comes for free."}, {"start": 150.0, "end": 151.0, "text": " Insanity."}, {"start": 151.0, "end": 155.72, "text": " Now, we noted that the input for this algorithm is just one image."}, {"start": 155.72, "end": 163.04, "text": " So, what about a cheeky experiment where we would add not a painting but a photo and pretend"}, {"start": 163.04, "end": 166.79999999999998, "text": " that it is a painting, can it relate it properly?"}, {"start": 166.79999999999998, "end": 170.32, "text": " Well, hold on to your papers and let's have a look."}, {"start": 170.32, "end": 179.56, "text": " Here's the photo, the breakdown of the brush strokes if this were a painting and wow!"}, {"start": 179.56, "end": 183.0, "text": " Here are the lighting effects."}, {"start": 183.0, "end": 187.76, "text": " It worked and if you enjoyed these results and would like to see more, make sure to have"}, {"start": 187.76, "end": 193.16, "text": " a look at this beautiful paper in the video description."}, {"start": 193.16, "end": 198.72, "text": " For instance, here you see a comparison against previous works and it seems quite clear"}, {"start": 198.72, "end": 202.64, "text": " that it smokes the competition on a variety of test cases."}, {"start": 202.64, "end": 206.6, "text": " And these papers they are comparing to are also quite recent."}, {"start": 206.6, "end": 211.44, "text": " The pace of progress in computer graphics research is absolutely incredible."}, {"start": 211.44, "end": 213.0, "text": " More on that in a moment."}, {"start": 213.0, "end": 216.44, "text": " Also, just look at the information density here."}, {"start": 216.44, "end": 220.76, "text": " This tiny diagram shows you exactly where the light source positions are."}, {"start": 220.76, "end": 226.07999999999998, "text": " I remember looking at a paper on a similar topic that did not have this thing and it made"}, {"start": 226.08, "end": 230.92000000000002, "text": " the entirety of the work a great deal more challenging to evaluate properly."}, {"start": 230.92000000000002, "end": 236.24, "text": " This kind of attention to detail might seem like a small thing but it makes all the difference"}, {"start": 236.24, "end": 239.64000000000001, "text": " for a great paper which this one is."}, {"start": 239.64000000000001, "end": 245.08, "text": " The provided user study shows that these outputs can be generated within a matter of seconds"}, {"start": 245.08, "end": 250.08, "text": " and reinforces our hunch that most people prefer the outputs of the new technique to the"}, {"start": 250.08, "end": 251.72000000000003, "text": " previous ones."}, {"start": 251.72000000000003, "end": 254.92000000000002, "text": " So much improvement in so little time."}, {"start": 254.92, "end": 260.44, "text": " And this we can now create digital lighting effects from a single image for paintings"}, {"start": 260.44, "end": 264.47999999999996, "text": " and even photographs in a matter of seconds."}, {"start": 264.47999999999996, "end": 266.28, "text": " What a time to be alive."}, {"start": 266.28, "end": 270.91999999999996, "text": " What you see here is an instrumentation of this exact paper we have talked about which"}, {"start": 270.91999999999996, "end": 273.4, "text": " was made by weights and 
biases."}, {"start": 273.4, "end": 279.03999999999996, "text": " I think organizing these experiments really showcases the usability of their system."}, {"start": 279.03999999999996, "end": 283.44, "text": " Weight and biases provides tools to track your experiments in your deep learning projects."}, {"start": 283.44, "end": 288.28, "text": " Your system is designed to save you a ton of time and money and it is actively used in"}, {"start": 288.28, "end": 294.76, "text": " projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more."}, {"start": 294.76, "end": 299.96, "text": " And the best part is that if you are an academic or have an open source project you can use"}, {"start": 299.96, "end": 301.68, "text": " their tools for free."}, {"start": 301.68, "end": 304.28, "text": " It really is as good as it gets."}, {"start": 304.28, "end": 309.56, "text": " Make sure to visit them through wnb.com slash papers or just click the link in the video"}, {"start": 309.56, "end": 312.8, "text": " description and you can get a free demo today."}, {"start": 312.8, "end": 322.40000000000003, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=sTe_-YOccdM
Two Shots of Green Screen Please!
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers Their instrumentation for this paper is available here: https://app.wandb.ai/stacey/greenscreen/reports/Two-Shots-to-Green-Screen%3A-Collage-with-Deep-Learning--VmlldzoxMDc4MjY 📝 The paper "Background Matting: The World is Your Green Screen" is available here: https://grail.cs.washington.edu/projects/background-matting/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Michael Albrecht, Nader S., Owen Campbell-Moore, Owen Skarpness, Rob Rowe, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh More info if you would like to appear here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. When shooting feature-length movies, or just trying to hold meetings from home through Zoom or Skype, we can make our appearance a little more professional by hiding the mess we have in the background and changing it to something more pleasing. Of course, this can only happen if we have an algorithm at hand that can detect what the foreground and the background is, which is typically easiest when we have a green screen behind us that is easy to filter out, even for the simpler algorithms out there. However, of course, not everyone has a green screen at home, and even people who do may need to hold meetings out there in the wilderness. Unfortunately, this means that the problem statement is the exact opposite of what we've just said, or, in other words, the background is almost anything but a green screen. So, is it possible to apply some of these newer neural network-based learning algorithms to tackle this problem? Well, this technique promises to make the problem much, much easier to solve. All we need to do is take two photographs, one with and one without the test subject, and it will automatically predict an alpha matte that isolates the test subject from the background. If you have a closer look, you'll see the first part of why this problem is difficult. This matte is not binary, so the final compositing is not a pure foreground-or-background decision for every pixel in the image; there are parts, typically around the silhouettes and hair, that need to be blended together. This blending information is contained in the gray parts of the image and is especially difficult to predict. Let's have a look at some results. You see the captured background here and the input video below, and you see that it is truly a sight to behold. It seems that this person is really just casually hanging out in front of a place that is definitely not a whiteboard. It even works in cases where the background or the camera itself is slightly in motion. Very cool. It really is much, much better than these previous techniques, where you see that temporal coherence is typically a problem. This is the flickering that you see here, which arises from the vastly different predictions for the alpha matte between neighboring frames of the video. Opposed to previous methods, this new technique shows very little of that. Excellent. Now, we noted that a little movement in the background is permissible, but it really means just a little. If things get too crazy back there, the outputs are also going to break down. This wizardry all works through a generative adversarial network, in which one neural network generates the output results. This, by itself, didn't work all that well, because the images used to train this neural network can differ greatly from the backgrounds that we record out there in the wild. In this work, the authors bridge the gap by introducing a detector network that tries to find faults in the output and tells the generator when it has failed to fool it. As the two neural networks duke it out, they improve together, yielding these incredible results. Note that there are plenty more contributions in the paper, so please make sure to have a look for more details. What a time to be alive. What you see here is an instrumentation of this exact paper we have talked about, which was made by Weights & Biases. I think organizing these experiments really showcases the usability of their system.
Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you are an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
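Since the transcript above leans on the alpha matte and on blending each pixel between foreground and background, here is a small, illustrative NumPy sketch of the standard compositing equation together with a deliberately naive matte estimated from the two captured photos. This is a didactic baseline under my own assumptions, not the GAN-based estimator from the paper, which handles hair and silhouettes far better than simple background subtraction ever could.

import numpy as np

def composite(foreground, new_background, alpha):
    """Standard compositing: C = alpha * F + (1 - alpha) * B.
    alpha is an H x W map in [0, 1]; the gray 'blending' regions from the
    video are exactly the pixels strictly between 0 and 1."""
    a = alpha[..., None]
    return a * foreground + (1.0 - a) * new_background

def naive_matte(shot_with_subject, background_only_shot, threshold=0.1):
    """A crude background-subtraction matte from the two photographs."""
    diff = np.abs(shot_with_subject - background_only_shot).mean(axis=2)
    return np.clip((diff - threshold) / (2.0 * threshold), 0.0, 1.0)

# Usage with float RGB images in [0, 1]:
# alpha = naive_matte(frame, clean_background_plate)
# out = composite(frame, brand_new_background, alpha)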
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karajol Nefahir,"}, {"start": 4.4, "end": 10.8, "text": " when shooting feature-length movies or just trying to hold meetings from home through Zoom or Skype,"}, {"start": 10.8, "end": 16.080000000000002, "text": " we can make our appearance a little more professional by hiding the maths we have in the background"}, {"start": 16.080000000000002, "end": 18.8, "text": " by changing it to something more pleasing."}, {"start": 18.8, "end": 24.64, "text": " Of course, this can only happen if we have an algorithm at hand that can detect what the foreground"}, {"start": 24.64, "end": 30.880000000000003, "text": " and the background is, which typically is easiest when we have a green screen behind us that is"}, {"start": 30.880000000000003, "end": 37.120000000000005, "text": " easy to filter for even the simpler algorithms out there. However, of course, not everyone has a"}, {"start": 37.120000000000005, "end": 43.6, "text": " green screen at home and even for people who do may need to hold meetings out there in the wilderness."}, {"start": 43.6, "end": 49.519999999999996, "text": " Unfortunately, this would mean that the problem statement is the exact opposite of what we've said"}, {"start": 49.52, "end": 56.080000000000005, "text": " or, in other words, the background is almost anything else but a green screen. So, is it possible"}, {"start": 56.080000000000005, "end": 61.28, "text": " to apply some of these newer neural network-based learning algorithms to tackle this problem?"}, {"start": 61.84, "end": 66.80000000000001, "text": " Well, this technique promises to make this problem much, much easier to solve."}, {"start": 66.80000000000001, "end": 72.32000000000001, "text": " All we need to do is take two photographs, one with and one without the test subject"}, {"start": 72.32000000000001, "end": 78.48, "text": " and it will automatically predict an alpha-mat that isolates the test subject from the background."}, {"start": 78.48, "end": 83.36, "text": " If you have a closer look, you'll see the first part of why this problem is difficult."}, {"start": 83.36, "end": 90.56, "text": " This mat is not binary, so the final compositing process is given not as only foreground or only"}, {"start": 90.56, "end": 96.64, "text": " background for every pixel in the image, but there are parts typically around the silhouettes and"}, {"start": 96.64, "end": 102.72, "text": " hair that need to be blended together. This blending information is contained in the gray parts of"}, {"start": 102.72, "end": 108.0, "text": " the image and are especially difficult to predict. Let's have a look at some results."}, {"start": 108.0, "end": 113.84, "text": " You see the captured background here and the input video below and you see that it is truly"}, {"start": 113.84, "end": 120.08, "text": " a sight to behold. It seems that this person is really just casually hanging out in front of a"}, {"start": 120.08, "end": 128.32, "text": " place that is definitely not a whiteboard. It even works in cases where the background or the camera"}, {"start": 128.32, "end": 139.35999999999999, "text": " itself is slightly in motion. Very cool. It really is much, much better than these previous techniques"}, {"start": 139.35999999999999, "end": 145.51999999999998, "text": " where you see the temporal coherence is typically a problem. 
This is the flickering that you see here"}, {"start": 145.51999999999998, "end": 150.79999999999998, "text": " which arises from the vastly different predictions for the alpha mat between neighboring frames in the"}, {"start": 150.79999999999998, "end": 156.95999999999998, "text": " video. Opposed to previous methods, this new technique shows very little of that. Excellent."}, {"start": 156.96, "end": 163.04000000000002, "text": " Now, we noted that a little movement in the background is permissible, but it really means just a"}, {"start": 163.04000000000002, "end": 168.16, "text": " little. If things get too crazy back there, the outputs are also going to break down."}, {"start": 168.16, "end": 174.96, "text": " This wizardry all works through a generative adversarial network in which one neural network"}, {"start": 174.96, "end": 181.52, "text": " generates the output results. This, by itself, didn't work all that well because the images used to"}, {"start": 181.52, "end": 187.44, "text": " train this neural network can differ greatly from the backgrounds that we record out there in the"}, {"start": 187.44, "end": 193.84, "text": " wild. In this work, the authors bridge the gap by introducing a detector network that tries to"}, {"start": 193.84, "end": 200.32000000000002, "text": " find faults in the output and tell the generator if it has failed to fool it. As the two neural"}, {"start": 200.32000000000002, "end": 206.16000000000003, "text": " networks do get out, they work and improve together yielding these incredible results."}, {"start": 206.16000000000003, "end": 210.88, "text": " Note that there are plenty of more contributions in the paper, so please make sure to have a"}, {"start": 210.88, "end": 217.6, "text": " look for more details. What a time to be alive. But you see here is an instrumentation of this"}, {"start": 217.6, "end": 223.44, "text": " exact paper we have talked about which was made by weights and biases. I think organizing these"}, {"start": 223.44, "end": 229.35999999999999, "text": " experiments really showcases the usability of their system. Weight and biases provides tools to"}, {"start": 229.35999999999999, "end": 234.24, "text": " track your experiments in your deep learning projects. Their system is designed to save you a ton"}, {"start": 234.24, "end": 240.4, "text": " of time and money and it is actively used in projects at prestigious labs such as OpenAI to"}, {"start": 240.4, "end": 246.88, "text": " your research, GitHub and more. And the best part is that if you are an academic or have an"}, {"start": 246.88, "end": 252.8, "text": " open source project, you can use their tools for free. It really is as good as it gets."}, {"start": 252.8, "end": 259.12, "text": " Make sure to visit them through wnb.com slash papers or just click the link in the video description"}, {"start": 259.12, "end": 264.48, "text": " and you can get a free demo today. Our thanks to weights and biases for their long standing support"}, {"start": 264.48, "end": 269.36, "text": " and for helping us make better videos for you. Thanks for watching and for your generous support"}, {"start": 269.36, "end": 271.36, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=9gX24m3kcjA
Can We Teach a Robot Hand To Keep Learning?
❤️ Check out Linode here and get $20 free credit on your account: https://www.linode.com/papers 📝 The paper "Efficient Adaptation for End-to-End Vision-Based Robotic Manipulation" is available here: https://sites.google.com/view/efficient-ft/home ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Michael Albrecht, Nader S., Owen Campbell-Moore, Owen Skarpness, Rob Rowe, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh More info if you would like to appear here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. In 2019, researchers at OpenAI came up with an amazing learning algorithm that they deployed on a robot hand that was able to dexterously manipulate a Rubik's cube even when it was severely hamstrung. A good game plan for such a thing is to first solve the problem in a computer simulation, where we can learn and iterate quickly, and then transfer everything the agent learned there to the real world, hoping that it obtained general knowledge that can indeed be applied to real tasks. Papers like these are some of my favorites. If you are one of our core Fellow Scholars, you may remember that we talked about walking robots about 200 episodes ago. In that amazing paper, we witnessed a robot not only learning to walk, but also adjusting its behavior and continuing to walk even if one or multiple legs lost power or got damaged. In that previous work, the key idea was to allow the robot to learn tasks such as walking not only in one optimal way, but to explore and build a map of many alternative motions relying on different body parts. Both of these papers teach us that working in the real world often presents new and unexpected challenges to overcome. And this new paper offers a technique to adapt a robot arm to these challenges after it has been deployed into the real world. It is supposed to be able to pick up objects, which sounds somewhat simple these days, until we realize that new, previously unseen objects may appear in the bin with different shapes or material models. For example, reflective and refractive objects are particularly perilous because they often show us more about their surroundings than about themselves. Lighting conditions may also change after deployment. The gripper's length or shape may change, and many, many other issues are likely to arise. Let's have a look at the lighting conditions part. Why would that be such an issue? The objects are the same, the scene looks nearly the same, so why is this a challenge? Well, if the lighting changes, the reflections change significantly, and since the robot arm sees its reflection and thinks that it is a different object, it just keeps trying to grasp it. After some fine-tuning, this method was able to increase the otherwise not too pleasant 32% success rate to 63%. Much, much better. Also, extending the gripper used to be somewhat of a problem, but as you see here, with this technique it is barely an issue anymore. Also, if we have a somewhat intelligent system and we move the position of the gripper around, nothing really changes, so we would expect it to perform well. Does it? Well, let's have a look. Unfortunately, it just seems to rotate around without too many meaningful actions. And now, hold on to your papers, because after using this continual learning scheme, yes, it improves substantially, makes very few mistakes, and can even pick up these tiny objects that are very challenging to grasp with this clumsy hand. This fine-tuning step typically takes an additional hour, or at most a few hours, of extra training and can be used to help these AIs learn continuously after they are deployed in the real world, thereby updating and improving themselves. It is hard to define what exactly intelligence is, but an important component of it is being able to reuse knowledge and adapt to new, unseen situations. This is exactly what this paper helps with. Absolute witchcraft. What a time to be alive. This episode has been supported by Linode.
Linode is the world's largest independent cloud computing provider. Unlike entry-level hosting services, Linode gives you full back-end access to your server, which is a step up to powerful, fast, fully configurable cloud computing. Linode also has one-click apps that streamline your ability to deploy websites, personal VPNs, game servers, and more. If you need something as small as a personal online portfolio, Linode has your back, and if you need to manage tons of clients' websites and reliably serve them to millions of visitors, Linode can do that too. What's more, they offer affordable GPU instances featuring the Quadro RTX 6000, which is tailor-made for AI, scientific computing, and computer graphics projects. If only I had had access to a tool like this while I was working on my last few papers. To receive $20 in credit on your new Linode account, visit linode.com slash papers, or click the link in the description and give it a try today. Our thanks to Linode for supporting the series and helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
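The transcript above mentions that an extra hour or so of training after deployment is what lets the grasping policy adapt to new grippers, lighting, and objects. Below is a minimal, hypothetical PyTorch-style sketch of such a post-deployment fine-tuning loop; the policy network, the dataset of freshly collected episodes, and the hyperparameters are placeholders of my own, not the actual architecture or training setup from the paper.

import torch
from torch.utils.data import DataLoader

def fine_tune(policy, deployed_episodes, epochs=3, lr=1e-5, device="cuda"):
    """Continue training a pretrained grasping policy on a small set of
    real-world episodes gathered after deployment, each providing an
    image, a candidate grasp action, and whether the grasp succeeded."""
    policy.to(device).train()
    # A small learning rate so the policy adapts to the new conditions
    # without forgetting what it already learned before deployment.
    optimizer = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()

    loader = DataLoader(deployed_episodes, batch_size=32, shuffle=True)
    for _ in range(epochs):
        for images, actions, success in loader:
            images, actions = images.to(device), actions.to(device)
            success = success.float().to(device)
            logits = policy(images, actions).squeeze(-1)  # grasp success score
            loss = loss_fn(logits, success)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return policy

The point of the sketch is only the shape of the workflow: keep the pretrained weights, feed in a modest amount of newly collected experience, and update gently so the robot keeps improving after it leaves the lab.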
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.32, "end": 11.120000000000001, "text": " In 2019, researchers at OpenAI came up with an amazing learning algorithm that they deployed"}, {"start": 11.120000000000001, "end": 17.44, "text": " on a robot hand that was able to dexterously manipulate a Rubik's cube even when it was severely"}, {"start": 17.44, "end": 23.44, "text": " hamstrung. A good game plan to perform such a thing is to first solve the problem in a computer"}, {"start": 23.44, "end": 30.080000000000002, "text": " simulation where we can learn and iterate quickly and then transfer everything the agent learned"}, {"start": 30.080000000000002, "end": 36.400000000000006, "text": " there to the real world and hope that it obtained general knowledge that indeed can be applied to"}, {"start": 36.400000000000006, "end": 43.040000000000006, "text": " real tasks. Papers like these are some of my favorites. If you're one of our core Fellow scholars,"}, {"start": 43.040000000000006, "end": 49.760000000000005, "text": " you may remember that we talked about walking robots about 200 episodes ago. In this amazing paper,"}, {"start": 49.76, "end": 57.199999999999996, "text": " we witnessed a robot not only learning to walk, but it could also adjust its behavior and keep walking"}, {"start": 57.199999999999996, "end": 64.72, "text": " even if one or multiple legs lose power or get damaged. In this previous work, the key idea was to"}, {"start": 64.72, "end": 72.64, "text": " allow the robot to learn tasks such as walking not only in one optimal way, but to explore and build"}, {"start": 72.64, "end": 79.36, "text": " a map of many alternative motions relying on different body parts. Both of these papers"}, {"start": 79.36, "end": 85.52, "text": " teach us that working in the real world often shows us new and expected challenges to overcome."}, {"start": 85.52, "end": 92.16, "text": " And this new paper offers a technique to adapt a robot arm to these challenges after it has been"}, {"start": 92.16, "end": 98.16, "text": " deployed into the real world. It is supposed to be able to pick up objects which sounds somewhat"}, {"start": 98.16, "end": 104.72, "text": " simple these days until we realize that new previously unseen objects may appear in the bin"}, {"start": 104.72, "end": 111.2, "text": " with different shapes or material models. For example, reflective and refractive objects are"}, {"start": 111.2, "end": 116.88, "text": " particularly perilous because they often show us more about their surroundings than about themselves."}, {"start": 117.6, "end": 124.0, "text": " Lighting conditions may also change after deployment. The grippers length or shape may change"}, {"start": 124.0, "end": 129.76, "text": " and many, many other issues are likely to arise. Let's have a look at the lighting conditions part."}, {"start": 129.76, "end": 136.32, "text": " Why would that be such an issue? The objects are the same, the scene looks nearly the same,"}, {"start": 136.32, "end": 143.44, "text": " so why is this a challenge? Well, if the lighting changes, the reflections change significantly"}, {"start": 143.44, "end": 149.6, "text": " and since the robot arm sees its reflection and thinks that it is a different object, it just keeps"}, {"start": 149.6, "end": 156.56, "text": " trying to grasp it. 
After some fine tuning, this method was able to increase the otherwise not"}, {"start": 156.56, "end": 165.92000000000002, "text": " too pleasant 32% success rate to 63%. Much, much better. Also, extending the gripper used to be"}, {"start": 165.92000000000002, "end": 171.12, "text": " somewhat of a problem but as you see here with this technique it is barely an issue anymore."}, {"start": 173.12, "end": 178.8, "text": " Also, if we have a somewhat intelligent system and we move the position of the gripper around"}, {"start": 178.8, "end": 182.88, "text": " nothing really changes, so we would expect it to perform well."}, {"start": 182.88, "end": 189.6, "text": " Does it? Well, let's have a look. Unfortunately, it just seems to be rotating around"}, {"start": 189.6, "end": 195.84, "text": " without too many meaningful actions. And now, hold on to your papers because after using this"}, {"start": 195.84, "end": 202.72, "text": " continual learning scheme, yes, it improved substantially and makes very few mistakes and can"}, {"start": 202.72, "end": 207.76, "text": " even pick up these tiny objects that are very challenging to grasp with this clumsy hand."}, {"start": 207.76, "end": 215.44, "text": " This fine tuning step typically takes an additional hour or at most a few hours of extra training"}, {"start": 215.44, "end": 221.92, "text": " and can be used to help these AIs learn continuously after they are deployed in the real world,"}, {"start": 221.92, "end": 228.16, "text": " thereby updating and improving themselves. It is hard to define what exactly intelligence is,"}, {"start": 228.16, "end": 235.04, "text": " but an important component of it is being able to reuse knowledge and adapt to new, unseen situations."}, {"start": 235.04, "end": 241.84, "text": " This is exactly what this paper helps with. Absolute witchcraft. What a time to be alive."}, {"start": 241.84, "end": 247.2, "text": " This episode has been supported by Linode. Linode is the world's largest independent cloud"}, {"start": 247.2, "end": 252.72, "text": " computing provider. Unlike entry-level hosting services, Linode gives you full back-end access"}, {"start": 252.72, "end": 259.36, "text": " to your server, which is a step-up to powerful, fast, fully configurable cloud computing. Linode"}, {"start": 259.36, "end": 265.28000000000003, "text": " also has one-click apps that streamline your ability to deploy websites, personal VPNs,"}, {"start": 265.28000000000003, "end": 270.88, "text": " game servers, and more. If you need something as small as a personal online portfolio,"}, {"start": 270.88, "end": 276.16, "text": " Linode has your back and if you need to manage tons of clients' websites and reliably serve them"}, {"start": 276.16, "end": 282.64, "text": " to millions of visitors, Linode can do that too. What's more, they offer affordable GPU instances"}, {"start": 282.64, "end": 290.0, "text": " featuring the Quadro RTX 6000, which is tailor-made for AI, scientific computing, and computer graphics"}, {"start": 290.0, "end": 295.76, "text": " projects. 
If only I had access to a tool like this while I was working on my last few papers."}, {"start": 295.76, "end": 301.84, "text": " To receive $20 in credit on your new Linode account, visit linode.com slash papers,"}, {"start": 301.84, "end": 305.84, "text": " or click the link in the description and give it a try today."}, {"start": 305.84, "end": 311.12, "text": " Our thanks to Linode for supporting the series and helping us make better videos for you."}, {"start": 311.12, "end": 315.92, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=u5wtoH0_KuA
This AI Does Nothing In Games…And Still Wins!
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers Their instrumentation for this paper is available here: https://app.wandb.ai/stacey/aprl/reports/Adversarial-Policies-in-Multi-Agent-Settings--VmlldzoxMDEyNzE 📝 The paper "Adversarial Policies" is available here: https://adversarialpolicies.github.io 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Michael Albrecht, Nader S., Owen Campbell-Moore, Owen Skarpness, Rob Rowe, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh More info if you would like to appear here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, it is almost taken for granted that neural network-based learning algorithms are capable of identifying objects in images or even writing full, coherent sentences about them, but fewer people know that there is also parallel research on trying to break these systems. For instance, some of these image detectors can be fooled by adding a little noise to the image, and in some specialized cases, we can even perform something called the one pixel attack. Let's have a look at some examples. Changing just this one pixel can make a classifier think that this ship is a car or that this horse is a frog and, amusingly, be quite confident about its guess. Note that the choice of this pixel and its color is by no means random; it requires solving a mathematical optimization problem to find out exactly how to perform this. Trying to build better image detectors while other researchers are trying to break them is not the only arms race we are experiencing in machine learning research. For instance, a few years ago, DeepMind introduced an incredible learning algorithm that looked at the screen much like a human would, but was able to reach superhuman levels in playing a few Atari games. It was a spectacular milestone in AI research. They have also just published a follow-up paper on this that we will cover very soon, so make sure to subscribe and hit the bell icon to not miss it when it appears in the near future. Interestingly, while these learning algorithms are being improved at a staggering pace, there is a parallel subfield where researchers endeavor to break these learning systems by slightly changing the information they are presented with. Let's have a look at OpenAI's example. Their first method adds a tiny bit of noise to a large portion of the video input, where the difference is barely perceptible, but it forces the learning algorithm to choose a different action than it would have chosen otherwise. In the other one, a different modification was used that has a smaller footprint but is more visible. For instance, in Pong, adding a tiny fake ball to the game can coerce the learner into going down when it was originally planning to go up. It is important to emphasize that the researchers did not do this by hand. The algorithm itself is able to pick up game-specific knowledge and find out how to fool the other AI using it. Both attacks perform remarkably well. However, it is not always true that we can just change these images or the playing environment to our liking to fool these algorithms. So, with this, an even more interesting question arises. Is it possible to just enter the game as a player and perform interesting stunts that can reliably win against these AIs? And with this, we have arrived at the subject of today's paper. This is the You Shall Not Pass game, where the red agent is trying to hold back the blue character and not let it cross the line. Here, you see two regular AIs duking it out. Sometimes the red wins. Sometimes the blue is able to get through. Nothing too crazy here. This is the reference case, which is somewhat well balanced. And now, hold on to your papers, because the adversarial agent that this new paper proposes does this. You may think this was some kind of glitch and I put the incorrect footage here by accident. No, this is not an error. You can believe your eyes. It basically collapses and does absolutely nothing. This can't be a useful strategy. Can it? Well, look at that.
It still wins the majority of the time. This is very confusing. How can that be? Let's have a closer look. This red agent is normally a somewhat competent player. As you can see here, it can punch the blue victim and make it fall. We now replace this red player with the adversarial agent, which collapses, and it almost feels like it hypnotizes the blue agent into also falling. And now, squeeze your papers, because the normal red opponent's win rate was 47%, and this collapsing chap wins 86% of the time. It not only wins, but it wins much, much more reliably than a competent AI. What is this wizardry? The answer is that the adversary induces off-distribution activations. To understand what that exactly means, let's have a look at this chart. It tells us how likely it is that the actions of the AI against different opponents are normal. As you see, when this agent, named Zoo, plays against itself, the bars are in the positive region, meaning that normal things are happening. Things go as expected. However, that's not the case for the blue lines, which show the actions when playing against this adversarial agent, in which case the blue victim's actions are not normal in the slightest. So, the adversarial agent really is doing nothing, but it is doing nothing in a way that reprograms its opponent to make mistakes and behave close to a completely randomly acting agent. This paper is absolute insanity. I love it. And if you look here, you see that the more the blue curve improves, the better this scheme works for a given game. For instance, it is doing really well on Kick and Defend, fairly well on Sumo Humans, and there is something about the Sumo Ants game that prevents this interesting kind of hypnosis from happening. I'd love to see a follow-up paper that can pull this off a little more reliably. What a time to be alive. What you see here is an instrumentation of this exact paper we have talked about, which was made by Weights & Biases. I think organizing these experiments really showcases the usability of their system. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs, such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you're an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
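To make the one pixel attack mentioned at the start of this transcript a little more tangible, here is a tiny, hypothetical Python sketch that looks for a single pixel change which lowers a classifier's confidence in the true label. The real attack typically solves this with an evolutionary optimizer rather than random search, and the classify function, image format, and search budget below are my own simplifying assumptions.

import random

def one_pixel_attack(image, classify, true_label, tries=5000):
    """Search for one (x, y, color) change that hurts the classifier most.
    `classify(image)` is assumed to return a dict of class probabilities,
    and `image` is a mutable H x W x 3 NumPy array of 0-255 values."""
    h, w, _ = image.shape
    best_change = None
    best_conf = classify(image)[true_label]
    for _ in range(tries):
        x, y = random.randrange(w), random.randrange(h)
        color = [random.randrange(256) for _ in range(3)]
        original = image[y, x].copy()
        image[y, x] = color                    # perturb exactly one pixel
        conf = classify(image)[true_label]
        if conf < best_conf:                   # confidence in the true label
            best_conf, best_change = conf, (x, y, color)  # dropped: keep it
        image[y, x] = original                 # restore and keep searching
    return best_change, best_conf

If the returned confidence drops below that of some other class, this single-pixel change is enough to flip the prediction, which is exactly the ship-to-car and horse-to-frog behavior shown in the examples above.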
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zonai-Fehir."}, {"start": 4.6000000000000005, "end": 10.4, "text": " Today, it is almost taken for granted that neural network-based learning algorithms are capable"}, {"start": 10.4, "end": 16.4, "text": " of identifying objects in images or even write full coherent sentences about them,"}, {"start": 16.4, "end": 22.0, "text": " but fewer people know that there is also parallel research on trying to break these systems."}, {"start": 22.0, "end": 27.7, "text": " For instance, some of these image detectors can be fooled by adding a little noise to the image"}, {"start": 27.7, "end": 34.5, "text": " and in some specialized cases, we can even perform something that is called the one pixel attack."}, {"start": 34.5, "end": 36.5, "text": " Let's have a look at some examples."}, {"start": 36.5, "end": 42.8, "text": " Changing just this one pixel can make a classifier think that this ship is a car"}, {"start": 42.8, "end": 49.3, "text": " or that this horse is a frog and, amusingly, be quite confident about its guess."}, {"start": 49.3, "end": 53.9, "text": " Note that the choice of this pixel and the color is by no means random"}, {"start": 53.9, "end": 60.4, "text": " and it needs solving a mathematical optimization problem to find out exactly how to perform this."}, {"start": 60.4, "end": 65.0, "text": " Trying to build better image detectors while other researchers are trying to break them"}, {"start": 65.0, "end": 69.3, "text": " is not the only arms race we are experiencing in machine learning research."}, {"start": 69.3, "end": 74.8, "text": " For instance, a few years ago, deep-mind introduced an incredible learning algorithm"}, {"start": 74.8, "end": 77.9, "text": " that looked at the screen much like a human would,"}, {"start": 77.9, "end": 82.9, "text": " but was able to reach superhuman levels in playing a few Atari games."}, {"start": 82.9, "end": 86.5, "text": " It was a spectacular milestone in AR research."}, {"start": 86.5, "end": 91.30000000000001, "text": " They also just have published a follow-up paper on this that will cover very soon"}, {"start": 91.30000000000001, "end": 96.7, "text": " so make sure to subscribe and hit the bell icon to not miss it when it appears in the near future."}, {"start": 96.7, "end": 101.60000000000001, "text": " Interestingly, while these learning algorithms are being improved at a staggering pace"}, {"start": 101.60000000000001, "end": 106.7, "text": " there is a parallel subfield where researchers endeavor to break these learning systems"}, {"start": 106.7, "end": 110.5, "text": " by slightly changing the information they are presented with."}, {"start": 110.5, "end": 112.9, "text": " Let's have a look at OpenAI's example."}, {"start": 112.9, "end": 118.2, "text": " Their first method adds a tiny bit of noise to a large portion of the video input"}, {"start": 118.2, "end": 120.6, "text": " where the difference is barely perceptible,"}, {"start": 120.6, "end": 127.8, "text": " but it forces the learning algorithm to choose a different action than it would have chosen otherwise."}, {"start": 127.8, "end": 132.6, "text": " In the other one, a different modification was used that has a smaller footprint,"}, {"start": 132.6, "end": 134.5, "text": " but is more visible."}, {"start": 134.5, "end": 141.1, "text": " For instance, in Punk, adding a tiny fake ball to the game can coerce the learner into going down"}, {"start": 141.1, "end": 
144.3, "text": " when it was originally planning to go up."}, {"start": 144.3, "end": 148.7, "text": " It is important to emphasize that the researchers did not do this by hand."}, {"start": 148.7, "end": 153.8, "text": " The algorithm itself is able to pick up game-specific knowledge by itself"}, {"start": 153.8, "end": 157.7, "text": " and find out how to fool the other AI using it."}, {"start": 157.7, "end": 160.7, "text": " Both attacks perform remarkably well."}, {"start": 160.7, "end": 164.79999999999998, "text": " However, it is not always true that we can just change these images"}, {"start": 164.79999999999998, "end": 168.79999999999998, "text": " or the playing environment to our desire to fool these algorithms."}, {"start": 168.79999999999998, "end": 173.0, "text": " So, with this, an even more interesting question arises."}, {"start": 173.0, "end": 178.39999999999998, "text": " Is it possible to just enter the game as a player and perform interesting stunts"}, {"start": 178.39999999999998, "end": 181.5, "text": " that can reliably win against these AI's?"}, {"start": 181.5, "end": 185.39999999999998, "text": " And with this, we have arrived to the subject of today's paper."}, {"start": 185.39999999999998, "end": 190.1, "text": " This is the Uchalnav Pass game where the red agent is trying to hold back"}, {"start": 190.1, "end": 193.7, "text": " the blue character and not let it cross the line."}, {"start": 193.7, "end": 196.9, "text": " Here, you see two regular AI's duking it out."}, {"start": 196.9, "end": 198.7, "text": " Sometimes the red wins."}, {"start": 198.7, "end": 201.29999999999998, "text": " Sometimes the blue is able to get through."}, {"start": 201.29999999999998, "end": 203.29999999999998, "text": " Nothing too crazy here."}, {"start": 203.29999999999998, "end": 207.0, "text": " This is the reference case which is somewhat well-balanced."}, {"start": 207.0, "end": 211.0, "text": " And now, hold on to your papers because this adversarial agent"}, {"start": 211.0, "end": 215.2, "text": " that this new paper proposes does this."}, {"start": 215.2, "end": 217.5, "text": " You may think this was some kind of glitch"}, {"start": 217.5, "end": 220.4, "text": " and I put the incorrect footage here by accident."}, {"start": 220.4, "end": 222.3, "text": " No, this is not an error."}, {"start": 222.3, "end": 223.9, "text": " You can believe your eyes."}, {"start": 223.9, "end": 228.4, "text": " It basically collapses and does absolutely nothing."}, {"start": 228.4, "end": 230.5, "text": " This can't be a useful strategy."}, {"start": 230.5, "end": 231.5, "text": " Can it?"}, {"start": 231.5, "end": 233.1, "text": " Well, look at that."}, {"start": 233.1, "end": 235.7, "text": " It still wins the majority of the time."}, {"start": 235.7, "end": 237.4, "text": " This is very confusing."}, {"start": 237.4, "end": 239.1, "text": " How can that be?"}, {"start": 239.1, "end": 240.5, "text": " Let's have a closer look."}, {"start": 240.5, "end": 244.2, "text": " This red agent is normally a somewhat competent player."}, {"start": 244.2, "end": 249.5, "text": " As you can see here, it can punch the blue victim and make it fall."}, {"start": 249.5, "end": 253.39999999999998, "text": " We now replaced this red player with the adversarial agent"}, {"start": 253.39999999999998, "end": 257.5, "text": " which collapsed and it almost feels like it hypnotized"}, {"start": 257.5, "end": 260.09999999999997, "text": " the blue agent to also fall."}, {"start": 260.09999999999997, "end": 
263.8, "text": " And now, squeeze your papers because the normal red opponent's"}, {"start": 263.8, "end": 271.2, "text": " win rate was 47% and this collapsing chap wins 86% of the time."}, {"start": 271.2, "end": 275.2, "text": " It not only wins but it wins much, much more reliably"}, {"start": 275.2, "end": 277.0, "text": " than a competent AI."}, {"start": 277.0, "end": 278.7, "text": " What is this wizardry?"}, {"start": 278.7, "end": 281.3, "text": " The answer is that the adversary induces"}, {"start": 281.3, "end": 283.7, "text": " of distribution activations."}, {"start": 283.7, "end": 286.0, "text": " To understand what that exactly means,"}, {"start": 286.0, "end": 287.9, "text": " let's have a look at this chart."}, {"start": 287.9, "end": 291.7, "text": " This tells us how likely it is that the actions of the AI"}, {"start": 291.7, "end": 294.2, "text": " against different opponents are normal."}, {"start": 294.2, "end": 298.59999999999997, "text": " As you see, when this agent named Zoo plays against itself,"}, {"start": 298.6, "end": 303.6, "text": " the bars are in the positive region, meaning that normal things are happening."}, {"start": 303.6, "end": 305.90000000000003, "text": " Things go as expected."}, {"start": 305.90000000000003, "end": 308.6, "text": " However, that's not the case for the blue lines,"}, {"start": 308.6, "end": 312.40000000000003, "text": " which are the actions when we play against this adversarial agent"}, {"start": 312.40000000000003, "end": 317.1, "text": " in which case the blue victim's actions are not normal in the slightest."}, {"start": 317.1, "end": 320.8, "text": " So, the adversarial agent is really doing nothing"}, {"start": 320.8, "end": 325.3, "text": " but it is doing nothing in a way that reprograms its opponent"}, {"start": 325.3, "end": 330.1, "text": " to make mistakes and behave close to a completely randomly acting agent."}, {"start": 330.1, "end": 333.1, "text": " This paper is absolute insanity."}, {"start": 333.1, "end": 334.40000000000003, "text": " I love it."}, {"start": 334.40000000000003, "end": 338.1, "text": " And if you look here, you see that the more the blue curve improves,"}, {"start": 338.1, "end": 340.6, "text": " the better this scheme works for a given game."}, {"start": 340.6, "end": 344.6, "text": " For instance, it is doing real good on kick and defend,"}, {"start": 344.6, "end": 347.1, "text": " fairly good on sumo humans,"}, {"start": 347.1, "end": 349.90000000000003, "text": " and that there is something about the sumo and game"}, {"start": 349.90000000000003, "end": 353.40000000000003, "text": " that prevents this interesting kind of hypnosis from happening."}, {"start": 353.4, "end": 358.0, "text": " I'd love to see a follow-up paper that can pull this off a little more reliably."}, {"start": 358.0, "end": 360.0, "text": " What a time to be alive."}, {"start": 360.0, "end": 364.9, "text": " What you see here is an instrumentation of this exact paper we have talked about"}, {"start": 364.9, "end": 367.5, "text": " which was made by weights and biases."}, {"start": 367.5, "end": 373.0, "text": " I think organizing these experiments really showcases the usability of their system."}, {"start": 373.0, "end": 377.5, "text": " Weight and biases provides tools to track your experiments in your deep learning projects."}, {"start": 377.5, "end": 381.09999999999997, "text": " Their system is designed to save you a ton of time and money"}, {"start": 381.1, "end": 384.5, "text": " and it is actively used in 
projects at prestigious labs,"}, {"start": 384.5, "end": 388.70000000000005, "text": " such as OpenAI, Toyota Research, GitHub, and more."}, {"start": 388.70000000000005, "end": 391.70000000000005, "text": " And the best part is that if you're an academic"}, {"start": 391.70000000000005, "end": 395.90000000000003, "text": " or have an open source project, you can use their tools for free."}, {"start": 395.90000000000003, "end": 398.40000000000003, "text": " It really is as good as it gets."}, {"start": 398.40000000000003, "end": 402.1, "text": " Make sure to visit them through wnb.com slash papers"}, {"start": 402.1, "end": 404.70000000000005, "text": " or just click the link in the video description"}, {"start": 404.70000000000005, "end": 406.90000000000003, "text": " and you can get a free demo today."}, {"start": 406.9, "end": 412.7, "text": " Our thanks to weights and biases for their long standing support and for helping us make better videos for you."}, {"start": 412.7, "end": 442.5, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=i4KWiq3guRU
Finally, A Blazing Fast Fluid Simulator! 🌊
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers The shown blog post is available here: https://www.wandb.com/articles/visualize-lightgbm-performance-in-one-line-of-code 📝 The paper "Fast Fluid Simulations with Sparse Volumes on the GPU" and some code samples are available here: - https://people.csail.mit.edu/kuiwu/gvdb_sim.html - https://www.researchgate.net/publication/325488464_Fast_Fluid_Simulations_with_Sparse_Volumes_on_the_GPU 📸 Our Instagram page is available here: https://www.instagram.com/twominutepapers/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Michael Albrecht, Nader S., Owen Campbell-Moore, Rob Rowe, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh More info if you would like to appear here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. With the nimble progress we are seeing in computer graphics research, it is now not only possible to perform beautiful fluid simulations, but we can also simulate more advanced effects, such as honey coiling, ferrofluids climbing up on other objects, and a variety of similar advanced effects. However, due to the complexity of these techniques, we often have to wait for several seconds, or even minutes, for every single one of these images, which often means that we have to leave our computer crunching these scenes overnight, or even wait several days for the results to appear. But what about real-time applications? Can we perform these fluid simulations in a more reasonable timeframe? Well, this technique offers detailed fluid simulations like the one here and is blazing fast. The reason for this is that, one, it uses a sparse volume representation, and two, it supports parallel computation and runs on your graphics card. So, what do these terms really mean? Let's start with the sparse part. With classical fluid simulation techniques, the simulation domain has to be declared in advance, and is typically confined to a cube. This comes with several disadvantages. For instance, if we wish to have a piece of fluid or smoke coming out of this cube, we are out of luck. The simulation domain stays, so we would have to know in advance how the simulation pans out, which we don't. Now, the first thing you're probably thinking about is, well, of course, make the simulation domain bigger. Yes, but. Unless special measures are taken, the bigger the domain, the more we have to compute. Even the empty parts take some computation. Ouch. This means that we have to confine the simulation to as small a domain as we can. So, this is where this technique comes into play. The sparse representation that it uses means that the simulation domain can take any form. As you see here, it just starts altering the shape of the simulation domain as the fluid splashes out of it. Furthermore, we are not only not doing work in the empty parts of the domain, which is a huge efficiency increase, but we also don't need to allocate too much additional memory for these regions, which, as you will see in a minute, is a key part of the value proposition of this technique. We noted that it supports parallel computation and runs on your graphics card. The graphics card part is key, because otherwise it would run on your processor, like most of the techniques that require minutes per frame. The more complex the technique is, typically, the more likely it is that it runs on your processor, which has a few cores, two to a few tens of cores. However, your graphics card, comparably, is almost a supercomputer, as it has up to a few hundred or even a few thousand cores to compute on. So, why not use that? Well, it's not that simple, and here is where the word parallel is key. If the problem can be decomposed into smaller independent problems, they can be allocated to many, many cores that can work independently and much more efficiently. This is exactly what this paper does with the fluid simulation. It runs it on your graphics card, and hence, it is typically 10 to 20 times faster than the equivalent techniques running on your processor. Let me try to demonstrate this with an example. Let's talk about coffee. You see, making coffee is not a very parallel task. If you ask a person to make coffee, it can typically be done in a few seconds. 
However, if you suddenly put 30 people in the kitchen and ask them to make coffee, it will not only not be a faster process, but it may even be slower than one person, for two reasons. One, it is hard to coordinate 30 people, and there will be miscommunication, and two, there are very few tools and lots of people, so they won't be able to help each other or, much worse, will just hold each other up. If we could formulate the coffee-making problem such that we need 30 units of coffee, and we have 30 kitchens, we could just place one person into each kitchen, and then they could work efficiently and independently. At the risk of oversimplifying the situation, this is an intuition of what this technique does, and hence, it runs on your graphics card and is incredibly fast. Also, note that your graphics card typically has a limited amount of memory, and remember, we noted that the sparse representation makes it very gentle on memory usage, making this the perfect algorithm for creating detailed, large-scale fluid simulations quickly. Excellent design. I plan to post slowed-down versions of some of the footage that you see here to our Instagram page. If you feel that it is something you would enjoy, make sure to follow us there. Just search for two-minute papers on Instagram to find it, or, as always, the link is in the video description. And finally, hold on to your papers, because if you look here, you see that the dam break scene can be simulated at about 5 frames per second, not seconds per frame, while the water drop scene can run at about 7 frames per second with a few million particles. We can, of course, scale up the simulation, and then we are back in seconds-per-frame land, but it is still blazing fast. If you look here, we can go up to 27 times faster, so in one all-nighter simulation, I can simulate what I could previously simulate in nearly a month. Sign me up. What a time to be alive. Now, note that in the early days of Two Minute Papers, about 300 to 400 episodes ago, I covered plenty of papers on fluid simulations; however, nearly no one really showed up to watch them. Before publishing any of these videos, I was like, here we go again, I knew that almost nobody would watch it, but this is a series where I set out to share the love for these papers. I believe we can learn a lot from these works, and if no one watches them, so be it. I still love doing this. But I was surprised to find out that over the years, something has changed. You Fellow Scholars somehow started to love the fluids, and I am delighted to see that. So, thank you so much for trusting the process, showing up, and watching these videos. I hope you're enjoying watching these as much as I enjoyed making them. This episode has been supported by Weights & Biases. Here, they show you how to make it to the top of Kaggle leaderboards by using Weights & Biases to find the best model faster than everyone else. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you're an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. 
Our thanks to Weights & Biases for their long-standing support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
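Editor's aside on the sparse volume idea above: this is a toy sketch only, not the paper's actual GPU data structure, and all names and numbers in it are made up. One way to picture it is a grid that only allocates small blocks of cells where particles actually are, so empty space costs neither memory nor compute and the domain is free to follow the splash.

```python
# Toy sparse grid: cells are grouped into 8x8x8 blocks, and a block is only
# created the moment a particle lands inside it.
import numpy as np

BLOCK = 8  # each block covers 8x8x8 grid cells

class SparseGrid:
    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.blocks = {}  # (bi, bj, bk) -> dense 8x8x8 array, created on demand

    def block_key(self, position):
        cell = np.floor(position / self.cell_size).astype(int)
        return tuple(cell // BLOCK)

    def touch(self, position):
        """Allocate the block containing `position` if it does not exist yet."""
        key = self.block_key(position)
        if key not in self.blocks:
            self.blocks[key] = np.zeros((BLOCK, BLOCK, BLOCK), dtype=np.float32)
        return self.blocks[key]

grid = SparseGrid(cell_size=0.1)
particles = np.random.rand(10_000, 3) * 4.0   # made-up particle positions
for p in particles:
    grid.touch(p)                             # the domain grows with the fluid
print(f"{len(grid.blocks)} blocks allocated instead of one big dense cube")
```

Blocks like these also split the work into many small, independent chunks, which is broadly what lets the "one kitchen per person" parallel execution described above map so well onto the thousands of cores of a graphics card.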
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.32, "end": 7.92, "text": " With the nimble progress we are seeing in computer graphics research,"}, {"start": 7.92, "end": 12.56, "text": " it is now not only possible to perform beautiful fluid simulations,"}, {"start": 12.56, "end": 18.0, "text": " but we can also simulate more advanced effects, such as honey coiling,"}, {"start": 18.0, "end": 24.32, "text": " ferrofluids climbing up on other objects, and a variety of similar advanced effects."}, {"start": 24.32, "end": 27.52, "text": " However, due to the complexity of these techniques,"}, {"start": 27.52, "end": 33.84, "text": " we often have to wait for several seconds, or even minutes for every single one of these images,"}, {"start": 33.84, "end": 39.12, "text": " which often means that we have to leave our computer crunching these scenes overnight."}, {"start": 39.12, "end": 43.04, "text": " Or even wait several days for the results to appear."}, {"start": 43.04, "end": 45.519999999999996, "text": " But what about real-time applications?"}, {"start": 45.519999999999996, "end": 49.28, "text": " Can we perform these fluid simulations in a more reasonable timeframe?"}, {"start": 49.28, "end": 57.04, "text": " Well, this technique offers detailed fluid simulations like the one here and is blazing fast."}, {"start": 57.04, "end": 62.08, "text": " The reason for this is that one, it uses a sparse volume representation,"}, {"start": 62.08, "end": 67.52000000000001, "text": " and two, it supports parallel computation and runs on your graphics card."}, {"start": 67.52000000000001, "end": 70.24000000000001, "text": " So, what do these terms really mean?"}, {"start": 70.24000000000001, "end": 72.4, "text": " Let's start with the sparse part."}, {"start": 72.4, "end": 77.92, "text": " With classical fluid simulation techniques, the simulation domain has to be declared in advance,"}, {"start": 77.92, "end": 80.8, "text": " and is typically confined to a cube."}, {"start": 80.8, "end": 83.68, "text": " This comes with several disadvantages."}, {"start": 83.68, "end": 89.12, "text": " For instance, if we wish to have a piece of fluid or smoke coming out of this cube,"}, {"start": 89.12, "end": 90.48, "text": " we are out of luck."}, {"start": 90.48, "end": 96.48, "text": " The simulation domain stays, so we would have to know in advance how the simulation pans out,"}, {"start": 96.48, "end": 97.84, "text": " which we don't."}, {"start": 97.84, "end": 102.64, "text": " Now, the first thing you're probably thinking about, well, of course,"}, {"start": 102.64, "end": 105.12, "text": " make the simulation domain bigger."}, {"start": 105.12, "end": 106.88, "text": " Yes, but."}, {"start": 106.88, "end": 111.92, "text": " Unless special measures are taken, the bigger the domain, the more we have to compute."}, {"start": 111.92, "end": 114.39999999999999, "text": " Even the empty parts take some computation."}, {"start": 115.11999999999999, "end": 116.08, "text": " Ouch."}, {"start": 116.08, "end": 120.47999999999999, "text": " This means that we have to confine the simulation to as small a domain as we can."}, {"start": 121.11999999999999, "end": 124.08, "text": " So, this is where this technique comes into play."}, {"start": 124.08, "end": 130.16, "text": " The sparse representation that it uses means that the simulation domain can take any form."}, {"start": 130.16, "end": 134.96, "text": " As you see here, it just starts 
altering the shape of the simulation domain"}, {"start": 134.96, "end": 136.72, "text": " as the fluid splashes out of it."}, {"start": 137.36, "end": 142.08, "text": " Furthermore, we are not only not doing work in the empty parts of the domain,"}, {"start": 142.08, "end": 147.12, "text": " which is a huge efficiency increase, but we don't need to allocate too much additional memory"}, {"start": 147.12, "end": 152.72, "text": " for these regions, which you will see in a minute will be a key part of the value proposition"}, {"start": 152.72, "end": 153.44, "text": " of this technique."}, {"start": 154.24, "end": 159.84, "text": " We noted that it supports parallel computation and runs on your graphics card."}, {"start": 159.84, "end": 165.6, "text": " The graphics card part is key because otherwise it would run on your processor like most of the"}, {"start": 165.6, "end": 167.92000000000002, "text": " techniques that require minutes per frame."}, {"start": 168.48000000000002, "end": 173.84, "text": " The more complex the technique is, typically, the more likely that it runs on your processor,"}, {"start": 173.84, "end": 177.36, "text": " which has a few cores, two, a few tenths, of course."}, {"start": 178.48000000000002, "end": 185.04, "text": " However, your graphics card, comparably, is almost a supercomputer as it has up to a few hundred"}, {"start": 185.04, "end": 187.92000000000002, "text": " or even a few thousand cores to compute on."}, {"start": 187.92, "end": 190.23999999999998, "text": " So, why not use that?"}, {"start": 191.04, "end": 196.16, "text": " Well, it's not that simple, and here is where the word parallel is key."}, {"start": 196.16, "end": 200.32, "text": " If the problem can be decomposed into smaller independent problems,"}, {"start": 200.32, "end": 206.79999999999998, "text": " they can be allocated to many many cores that can work independently and much more efficiently."}, {"start": 207.35999999999999, "end": 211.2, "text": " This is exactly what this paper does with the fluid simulation."}, {"start": 211.2, "end": 216.72, "text": " It runs it on your graphics card, and hence, it is typically 10 to 20 times faster"}, {"start": 216.72, "end": 220.0, "text": " than the equivalent techniques running on your processor."}, {"start": 220.0, "end": 222.32, "text": " Let me try to demonstrate this with an example."}, {"start": 222.88, "end": 224.48, "text": " Let's talk about coffee."}, {"start": 224.48, "end": 228.07999999999998, "text": " You see, making coffee is not a very parallel task."}, {"start": 228.07999999999998, "end": 232.48, "text": " If you ask a person to make coffee, it can typically be done in a few seconds."}, {"start": 233.04, "end": 238.0, "text": " However, if you suddenly put 30 people in the kitchen and ask them to make coffee,"}, {"start": 238.0, "end": 243.92, "text": " it will not only not be a faster process, but may even be slower than one person"}, {"start": 243.92, "end": 248.64, "text": " because of two reasons. 
One, it is hard to coordinate 30 people,"}, {"start": 248.64, "end": 254.56, "text": " and there will be miscommunication, and two, there are very few tools and lots of people,"}, {"start": 254.56, "end": 259.36, "text": " so they won't be able to help each other or much worse, will just hold each other up."}, {"start": 259.68, "end": 264.88, "text": " If we could formulate the coffee making problem, such that we need 30 units of coffee,"}, {"start": 264.88, "end": 269.76, "text": " and we have 30 kitchens, we could just place one person into each kitchen,"}, {"start": 269.76, "end": 274.15999999999997, "text": " and then they could work efficiently and independently."}, {"start": 274.15999999999997, "end": 279.76, "text": " At the risk of oversimplifying the situation, this is an intuition of what this technique does,"}, {"start": 279.76, "end": 284.48, "text": " and hence, it runs on your graphics card and is incredibly fast."}, {"start": 285.03999999999996, "end": 289.84, "text": " Also, note that your graphics card typically has a limited amount of memory,"}, {"start": 289.84, "end": 295.59999999999997, "text": " and remember, we noted that the sparse representation makes it very gentle on memory usage,"}, {"start": 295.6, "end": 301.76000000000005, "text": " making this the perfect algorithm for creating detailed, large-scale fluid simulations quickly."}, {"start": 302.08000000000004, "end": 307.68, "text": " Excellent design. I plan to post slow-down versions of some of the footage that you see here"}, {"start": 307.68, "end": 312.96000000000004, "text": " to our Instagram page if you feel that it is something you would enjoy, make sure to follow us there."}, {"start": 313.6, "end": 319.36, "text": " Just search for two-minute papers on Instagram to find it, or also, as always, the link is in the"}, {"start": 319.36, "end": 325.76, "text": " video description. And finally, hold on to your papers because if you look here, you see that the"}, {"start": 325.76, "end": 331.6, "text": " damn break scene can be simulated with about 5 frames per second, not seconds per frame,"}, {"start": 331.6, "end": 338.08000000000004, "text": " while the water drop scene can run about 7 frames per second with a few million particles."}, {"start": 338.40000000000003, "end": 344.72, "text": " We can, of course, scale up the simulation, and then we are back at seconds per frame land,"}, {"start": 344.72, "end": 351.28000000000003, "text": " but it is still blazing fast. If you look here, we can go up to 27 times faster,"}, {"start": 351.84000000000003, "end": 357.28000000000003, "text": " so in one all-nighter simulation, I can simulate what I could simulate in nearly a month."}, {"start": 357.92, "end": 363.92, "text": " Sign me up. What a time to be alive. Now, note that in the early days of two-minute papers,"}, {"start": 363.92, "end": 369.6, "text": " about 300, 400 episodes ago, I covered plenty of papers on fluid simulations,"}, {"start": 369.6, "end": 375.12, "text": " however, nearly no one really showed up to watch them. Before publishing any of these videos,"}, {"start": 375.12, "end": 381.68, "text": " I was like, here we go again, I knew that almost nobody would watch it, but this is a series where"}, {"start": 381.68, "end": 387.36, "text": " I set out to share the love for these papers. I believe we can learn a lot from these works,"}, {"start": 387.36, "end": 393.76000000000005, "text": " and if no one watches them, so be it. I still love doing this. 
But I was surprised to find out"}, {"start": 393.76, "end": 400.64, "text": " that over the years, something has changed. You fell off scholars somehow, started to love the fluids,"}, {"start": 400.64, "end": 405.59999999999997, "text": " and I am delighted to see that. So, thank you so much for trusting the process,"}, {"start": 405.59999999999997, "end": 410.96, "text": " showing up, and watching these videos. I hope you're enjoying watching these as much as I enjoyed"}, {"start": 410.96, "end": 416.96, "text": " making them. This episode has been supported by weights and biases. Here, they show you how to"}, {"start": 416.96, "end": 422.64, "text": " make it to the top of Kaggle leaderboards by using weights and biases to find the best model"}, {"start": 422.64, "end": 428.15999999999997, "text": " faster than everyone else. Wates and biases provides tools to track your experiments in your"}, {"start": 428.15999999999997, "end": 433.36, "text": " deep learning projects. Their system is designed to save you a ton of time and money, and it is"}, {"start": 433.36, "end": 440.47999999999996, "text": " actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more."}, {"start": 440.47999999999996, "end": 446.24, "text": " And the best part is that if you're an academic or have an open source project, you can use their"}, {"start": 446.24, "end": 453.12, "text": " tools for free. It really is as good as it gets. Make sure to visit them through wnb.com slash"}, {"start": 453.12, "end": 458.64, "text": " papers, or just click the link in the video description, and you can get a free demo today."}, {"start": 458.64, "end": 463.52, "text": " Our thanks to weights and biases for their longstanding support, and for helping us make better"}, {"start": 463.52, "end": 477.35999999999996, "text": " videos for you. Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=MPdj8KGZHa0
Neural Network Dreams About Beautiful Natural Scenes
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers The shown blog post is available here: https://www.wandb.com/articles/better-models-faster-with-weights-biases 📝 The paper "Manipulating Attributes of Natural Scenes via Hallucination" is available here: https://hucvl.github.io/attribute_hallucination/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Michael Albrecht, Nader S., Owen Campbell-Moore, Rob Rowe, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh More info if you would like to appear here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Neural network image source: https://en.wikipedia.org/wiki/File:Neural_network_example.svg Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #ai #machinelearning
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. In the last few years, the pace of progress in machine learning research has been staggering. Neural network-based learning algorithms are now able to look at an image and describe what's seen in this image, or, even better, the other way around, generating images from a written description. You see here a set of results from BigGAN, a state-of-the-art image generation technique, and marvel at the fact that all of these images are indeed synthetic. The GAN part of this technique abbreviates the term generative adversarial network. This means a pair of neural networks that battle each other over time to master a task, for instance, to generate realistic-looking images when given a theme. After that, StyleGAN and even its second version appeared, which, among many other crazy good features, opened up the possibility to lock in several aspects of these images, for instance, age, pose, some facial features, and more, and then we could mix them with other images to our liking while retaining these locked-in aspects. I am loving the fact that these newer research works are moving in a direction of more artistic control, and the paper we'll discuss today also takes a step in this direction. With this new work, we can ask to translate our image into different seasons, weather conditions, times of day, and more. Let's have a look. Here we have our input, and imagine that we'd like to add more clouds and translate it into a different time of the day, and there we go. Wow! Or we can take this snowy landscape image and translate it into a blooming, flowery field. This truly seems like black magic, so I can't wait to look under the hood and see what is going on. The input is our source image and a set of attributes where we can describe our artistic vision. For instance, here, let's ask the AI to add some more vegetation to this scene. That will do. Step number one: this artistic description is routed to a scene generation network, which hallucinates an image that fits our description. Well, that's great. As you see here, it kind of resembles the input image, but still, it is substantially different. So, why is that? If you look here, you see that it also takes the layout of our image as an input, or in other words, the colors and the silhouettes describe what part of the image contains a lake, vegetation, clouds, and more. It creates the hallucination according to that, so we have more clouds, that's great, but the road here has been left out. So now we are stuck with an image that only kind of resembles what we want. What do we do now? Now, step number two: let's not use this hallucinated image directly, but apply its artistic style to our source image. Brilliant. Now we have our content, but with more vegetation. However, remember that we have the layout of the input image, and that is a gold mine of information. So, are you thinking what I am thinking? Yes, including this indeed opens up a killer application. We can even change the scene around by modifying the labels on this layout, for instance, by adding some mountains, making it a grassy field, and adding a lake. Making a scene from scratch from a simple starting point is also possible. Just add some mountains, trees, a lake, and you are good to go. And then you can use the other part of the algorithm to transform it into a different season, time of day, or even make it foggy. What a time to be alive. Now, as with every research work, there is still room for improvement. 
For instance, I find that it is hard to define what it means to have a cloudier image. For instance, the hallucination here works according to the specification. It indeed has more clouds than this. But, for instance, here, I am unsure if we have more clouds in the output. You see that perhaps there are even fewer than in the input. It seems that not all of them made it to the final image. Also, do fewer and denser clouds qualify as cloudier? Nonetheless, I think this is going to be an awesome tool as is, and I can only imagine how cool it will become two more papers down the line. This episode has been supported by Weights & Biases. In this post, they show you how to easily iterate on models by visualizing and comparing experiments in real time. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs, such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you are an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
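Editor's aside on the two steps above: the following pseudo-Python outline is only a sketch under assumed names; `scene_generator` and `style_transfer` stand in for the paper's trained networks and are passed in as parameters, not real library calls.

```python
def edit_scene(source_image, semantic_layout, attributes,
               scene_generator, style_transfer):
    # Step 1: hallucinate an image that fits the semantic layout and the
    # requested attributes, e.g. {"clouds": 0.8, "night": 0.6, "vegetation": 0.3}.
    hallucinated = scene_generator(semantic_layout, attributes)

    # Step 2: keep the content of the original photo and borrow only the look
    # (season, weather, time of day) of the hallucinated image.
    return style_transfer(content=source_image, style=hallucinated,
                          layout=semantic_layout)
```

Because the layout is just a label map, editing it (adding a lake, raising a mountain) and re-running the same two steps is what enables the scene-building shown at the end of the video.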
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.32, "end": 9.52, "text": " In the last few years, the pace of progress in machine learning research has been staggering."}, {"start": 9.52, "end": 14.56, "text": " Neural network-based learning algorithms are now able to look at an image and describe what's"}, {"start": 14.56, "end": 21.28, "text": " seen in this image, or even better, the other way around generating images from a written description."}, {"start": 21.6, "end": 26.8, "text": " You see here, a set of results from BigGAN, a state-of-the-art image generation technique,"}, {"start": 26.8, "end": 31.36, "text": " and marvel at the fact that all of these images are indeed synthetic."}, {"start": 31.36, "end": 36.480000000000004, "text": " TheGAN part of this technique abbreviates the term generative adversarial network."}, {"start": 36.480000000000004, "end": 41.760000000000005, "text": " This means a pair of neural networks that battle each other over time to master a task,"}, {"start": 41.760000000000005, "end": 46.08, "text": " for instance, to generate realistic-looking images when given a theme."}, {"start": 46.08, "end": 50.08, "text": " After that, styleGAN and even its second version appeared,"}, {"start": 50.08, "end": 55.28, "text": " which, among many other crazy good features, opened up the possibility to lock in"}, {"start": 55.28, "end": 62.08, "text": " several aspects of these images, for instance, age, pose, some facial features, and more,"}, {"start": 62.08, "end": 68.24000000000001, "text": " and then we could mix them with other images to our liking while retaining these locked-in aspects."}, {"start": 68.24000000000001, "end": 73.68, "text": " I am loving the fact that these newer research works are moving in a direction of more artistic"}, {"start": 73.68, "end": 78.72, "text": " control and the paper we'll discuss today also takes a step in this direction."}, {"start": 78.72, "end": 84.96000000000001, "text": " With this new work, we can ask to translate our image into different seasons, weather conditions,"}, {"start": 84.96, "end": 90.24, "text": " time of day, and more. Let's have a look. Here we have our input and the"}, {"start": 90.24, "end": 95.67999999999999, "text": " imagine that we'd like to add more clouds and translate it into a different time of the day,"}, {"start": 95.67999999999999, "end": 103.6, "text": " and there we go. Wow! Or we can take this snowy landscape image and translate it into a blooming"}, {"start": 103.6, "end": 110.39999999999999, "text": " flowery field. This truly seems like black magic, so I can't wait to look under the hood"}, {"start": 110.4, "end": 116.08000000000001, "text": " and see what is going on. The input is our source image and the set of attributes"}, {"start": 116.08000000000001, "end": 122.48, "text": " where we can describe our artistic vision. For instance, here let's ask the AI to add some more"}, {"start": 122.48, "end": 129.44, "text": " vegetation to this scene. That will do. Step number one, this artistic description is rooted to a"}, {"start": 129.44, "end": 135.6, "text": " scene generation network which hallucinates an image that fits our description. Well, that's great."}, {"start": 135.6, "end": 141.68, "text": " As you see here, it kind of resembles the input image, but still it is substantially different."}, {"start": 141.68, "end": 148.32, "text": " So, why is that? 
If you look here, you see that it also takes the layout of our image as an input,"}, {"start": 148.32, "end": 154.72, "text": " or in other words, the colors and the silhouettes describe what part of the image contains a lake,"}, {"start": 154.72, "end": 160.48, "text": " vegetation, clouds, and more. It creates the hallucination according to that,"}, {"start": 160.48, "end": 167.28, "text": " so we have more clouds, that's great, but the road here has been left out. So now we are stuck"}, {"start": 167.28, "end": 174.48, "text": " with an image that only kind of resembles what we want. What do we do now? Now, step number two,"}, {"start": 174.48, "end": 181.35999999999999, "text": " let's not use this hallucinated image directly, but apply its artistic style to our source image."}, {"start": 181.35999999999999, "end": 188.0, "text": " Brilliant. Now we have our content, but with more vegetation. However, remember that we have the"}, {"start": 188.0, "end": 194.88, "text": " layout of the input image, that is a gold mine of information. So, are you thinking what I am thinking?"}, {"start": 195.52, "end": 202.08, "text": " Yes, including this indeed opens up a killer application. We can even change the scene around"}, {"start": 202.08, "end": 206.48, "text": " by modifying the labels on this layout, for instance, by adding some mountains,"}, {"start": 206.48, "end": 221.35999999999999, "text": " make it a grassy field, and add the lake. Making a scene from scratch from a simple starting point"}, {"start": 221.35999999999999, "end": 228.72, "text": " is also possible. Just add some mountains, trees, a lake, and you are good to go. And then you can"}, {"start": 228.72, "end": 235.2, "text": " use the other part of the algorithm to transform it into a different season, time of day, or even make"}, {"start": 235.2, "end": 242.32, "text": " it foggy. What a time to be alive. Now, as with every research work, there is still room for improvements."}, {"start": 242.32, "end": 247.83999999999997, "text": " For instance, I find that it is hard to define what it means to have a cloudier image."}, {"start": 247.83999999999997, "end": 253.67999999999998, "text": " For instance, the hallucination here works according to the specification. It indeed has more clouds"}, {"start": 253.67999999999998, "end": 260.64, "text": " than this. But, for instance, here I am unsure if we have more clouds in the output. You see that"}, {"start": 260.64, "end": 267.12, "text": " perhaps it is even less than in the input. It seems that not all of them made it to the final image."}, {"start": 267.12, "end": 274.15999999999997, "text": " Also, do fewer and denser clouds qualify as cloudier. Nonetheless, I think this is going to be an"}, {"start": 274.15999999999997, "end": 279.84, "text": " awesome tool as is, and I can only imagine how cool it will become two more papers down the line."}, {"start": 280.4, "end": 286.24, "text": " This episode has been supported by weights and biases. In this post, they show you how to easily"}, {"start": 286.24, "end": 292.48, "text": " iterate on models by visualizing and comparing experiments in real time. Weight and biases"}, {"start": 292.48, "end": 297.44, "text": " provide tools to track your experiments in your deep learning projects. 
Their system is designed"}, {"start": 297.44, "end": 302.88, "text": " to save you a ton of time and money, and it is actively used in projects at prestigious labs,"}, {"start": 302.88, "end": 310.16, "text": " such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you are an academic"}, {"start": 310.16, "end": 316.88000000000005, "text": " or have an open source project, you can use their tools for free. It really is as good as it gets."}, {"start": 316.88000000000005, "end": 323.20000000000005, "text": " Make sure to visit them through wnb.com slash papers or just click the link in the video description"}, {"start": 323.20000000000005, "end": 328.56, "text": " and you can get a free demo today. Our thanks to weights and biases for their long-standing support"}, {"start": 328.56, "end": 333.44000000000005, "text": " and for helping us make better videos for you. Thanks for watching and for your generous support,"}, {"start": 333.44, "end": 343.44, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=rGOy9rqGX1k
What’s Inside a Neural Network?
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers The shown blog post is available here: https://www.wandb.com/articles/visualize-xgboost-in-one-line 📝 The paper "Zoom In: An Introduction to Circuits" is available here: https://distill.pub/2020/circuits/zoom-in/ Followup article: https://distill.pub/2020/circuits/early-vision/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Daniel Hasegan, Eric Haddad, Eric Martel, Javier Bustamante, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Michael Albrecht, Nader S., Owen Campbell-Moore, Rob Rowe, Robin Graham, Steef, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh More info if you would like to appear here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #ai #machinelearning
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. This paper is not your usual paper, but it does something quite novel. It appeared in the Distill journal, one of my favorites, which offers new and exciting ways of publishing beautiful but unusual works aiming for exceptional clarity and readability. And of course, this new paper is no different. It claims that despite the fact that these neural network-based learning algorithms look almost unfathomably complex inside, if we look under the hood, we can often find meaningful algorithms in there. Well, I am quite excited for this, so sign me up. Let's have a look at an example. At the risk of oversimplifying the explanation, we can say that a neural network is given as a collection of neurons and connections. If you look here, you can see the visualization of three neurons. At first glance, they look like an absolute mess, don't they? Well, kind of, but upon closer inspection, we see that there is quite a bit of structure here. For instance, the upper part looks like a car window. The next one resembles a car body, and the bottom of the third neuron clearly contains a wheel detector. However, no car looks exactly like these neurons, so what does the network do with all this? Well, in the next layer, the neurons arise as a combination of neurons in the previous layers, where we cherry-pick the parts of each neuron that we wish to use. So here, with red, you see that we are exciting the upper part of this neuron to get the window, using roughly the entirety of the middle one, and using the bottom part of the third one to assemble this. And now we have a neuron in the next layer that will help us detect whether we see a car in an image or not. So cool. I love this one. Let's look at another example. Here you see a dog head detector, but it kind of looks like a crazy Picasso painting, where he tried to paint a human from not one angle like everyone else, but from all possible angles in one image. But this is a neural network. So why engage in this kind of insanity? Well, if we have a picture of a dog, the orientation of the head of the dog can be anything. It can be a frontal image, look from left to right, right to left, and so on. So this is a pose-invariant dog head detector. What this means is that it can detect many different orientations, and look here. You see that it gets very excited by all of these good boys. I think we even have a squirrel in here. Good thing this is not the only neuron we have in the network to make a decision. I hope that this already shows that this is truly an ingenious design. If you have a look at the paper in the video description, which you should absolutely do, you'll see exactly how these neurons are built from the neurons in the previous layers. The article contains way more than this. You'll see a lot more dog snouts, curve detectors, and even a follow-up article that you can have a look at and even comment on before it gets finished. A huge thank you to Chris Olah, who devotes his time away from research and uses his own money to run this amazing journal. I cannot wait to cover more of these articles in future episodes, so make sure to subscribe and hit the bell icon to never miss any of those. So finally, we understand a little more about how neural networks do all these amazing things they are able to do. What a time to be alive. This episode has been supported by Weights & Biases. 
Here, they show you how you can visualize the training process for your boosted trees with XGBoost using their tool. If you have a closer look, you'll see that all you need is one line of code. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you're an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
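Editor's aside on the "cherry-picking" of previous-layer neurons described above: the tiny NumPy toy below is purely illustrative and has nothing to do with the paper's real weights. The idea is that a next-layer "car" unit listens for window-like activations near the top of its receptive field, body-like activations in the middle, and wheel-like activations at the bottom.

```python
# Toy "car neuron" built as a spatially selective combination of three
# previous-layer feature maps (window, body, wheel detectors).
import numpy as np

H, W = 9, 9                                                # receptive field size
window_act = np.zeros((H, W)); window_act[0:3, :] = 1.0    # windows up top
body_act   = np.zeros((H, W)); body_act[3:6, :]   = 1.0    # body in the middle
wheel_act  = np.zeros((H, W)); wheel_act[6:9, :]  = 1.0    # wheels at the bottom

# Hand-made weights that only "listen" to each detector in the region we want.
w_window = np.zeros((H, W)); w_window[0:3, :] = 1.0
w_body   = np.zeros((H, W)); w_body[3:6, :]   = 1.0
w_wheel  = np.zeros((H, W)); w_wheel[6:9, :]  = 1.0

car_response = np.sum(w_window * window_act +
                      w_body   * body_act +
                      w_wheel  * wheel_act)   # large only when the parts line up
print("car neuron response:", car_response)   # 81.0 for this idealized input
```

Flip the input upside down (wheels on top, windows at the bottom) and these same weights produce no response at all, which is the sense in which the combined unit detects a car rather than merely its parts.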
[{"start": 0.0, "end": 5.1000000000000005, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 5.1000000000000005, "end": 9.56, "text": " This paper is not your usual paper, but it does something quite novel."}, {"start": 9.56, "end": 14.84, "text": " It appeared in the distilled journal, one of my favorites, which offers new and exciting"}, {"start": 14.84, "end": 21.52, "text": " ways of publishing beautiful, but unusual works aiming for exceptional clarity and readability."}, {"start": 21.52, "end": 24.36, "text": " And of course, this new paper is no different."}, {"start": 24.36, "end": 28.64, "text": " It claims that despite the fact that these neural network-based learning algorithms look"}, {"start": 28.64, "end": 35.2, "text": " almost unfathomably complex inside, if we look under the hood, we can often find meaningful"}, {"start": 35.2, "end": 36.6, "text": " algorithms in there."}, {"start": 36.6, "end": 40.92, "text": " Well, I am quite excited for this, so sign me up."}, {"start": 40.92, "end": 42.4, "text": " Let's have a look at an example."}, {"start": 42.4, "end": 47.6, "text": " At the risk of oversimplifying the explanation, we can say that a neural network is given"}, {"start": 47.6, "end": 50.760000000000005, "text": " as a collection of neurons and connections."}, {"start": 50.760000000000005, "end": 54.519999999999996, "text": " If you look here, you can see the visualization of three neurons."}, {"start": 54.519999999999996, "end": 58.2, "text": " At first glance, they look like an absolute mess, don't they?"}, {"start": 58.2, "end": 64.08, "text": " Well, kind of, but upon closer inspection, we see that there is quite a bit of structure"}, {"start": 64.08, "end": 65.08, "text": " here."}, {"start": 65.08, "end": 68.56, "text": " For instance, the upper part looks like a car window."}, {"start": 68.56, "end": 74.52000000000001, "text": " The next one resembles a car body, and the bottom of the third neuron clearly contains"}, {"start": 74.52000000000001, "end": 75.84, "text": " a wheel detector."}, {"start": 75.84, "end": 82.0, "text": " However, no car looks exactly like these neurons, so what does the network do with all this?"}, {"start": 82.0, "end": 87.32000000000001, "text": " Well, in the next layer, the neurons arise as a combination of neurons in the previous"}, {"start": 87.32, "end": 91.83999999999999, "text": " layers where we cherry pick parts of each neuron that we wish to use."}, {"start": 91.83999999999999, "end": 98.44, "text": " So here, we'd read you see that we are exciting the upper part of this neuron to get the window,"}, {"start": 98.44, "end": 103.72, "text": " use roughly the entirety of the middle one, and use the bottom part of the third one to"}, {"start": 103.72, "end": 106.11999999999999, "text": " assemble this."}, {"start": 106.11999999999999, "end": 111.32, "text": " And now we have a neuron in the next layer that will help us detect whether we see a car"}, {"start": 111.32, "end": 113.35999999999999, "text": " in an image or not."}, {"start": 113.35999999999999, "end": 114.35999999999999, "text": " So cool."}, {"start": 114.35999999999999, "end": 116.35999999999999, "text": " I love this one."}, {"start": 116.36, "end": 118.44, "text": " Let's look at another example."}, {"start": 118.44, "end": 124.36, "text": " Here you see a dog head detector, but it kind of looks like a crazy Picasso painting"}, {"start": 124.36, "end": 130.52, "text": " where he tried to paint a human from not 
one angle like everyone else, but from all possible"}, {"start": 130.52, "end": 132.6, "text": " angles on one image."}, {"start": 132.6, "end": 134.36, "text": " But this is a neural network."}, {"start": 134.36, "end": 137.2, "text": " So why engage in this kind of insanity?"}, {"start": 137.2, "end": 143.48, "text": " Well, if we have a picture of a dog, the orientation of the head of the dog can be anything."}, {"start": 143.48, "end": 149.32, "text": " It can be a frontal image, look from the left to right, right to left, and so on."}, {"start": 149.32, "end": 152.79999999999998, "text": " So this is a pose invariant dog head detector."}, {"start": 152.79999999999998, "end": 158.16, "text": " What this means is that it can detect many different orientations and look here."}, {"start": 158.16, "end": 162.07999999999998, "text": " You see that it gets very excited by all of these good boys."}, {"start": 162.07999999999998, "end": 164.76, "text": " I think we even have a squirrel in here."}, {"start": 164.76, "end": 168.76, "text": " Good thing this is not the only neuron we have in the network to make a decision."}, {"start": 168.76, "end": 173.44, "text": " I hope that it already shows that this is truly an ingenious design."}, {"start": 173.44, "end": 177.76, "text": " If you have a look at the paper in the video description, which you should absolutely do,"}, {"start": 177.76, "end": 182.92, "text": " you'll see exactly how these neurons are built from the neurons in the previous layers."}, {"start": 182.92, "end": 185.76, "text": " The article contains way more than this."}, {"start": 185.76, "end": 191.26, "text": " You'll see a lot more dog snouts, curve detectors, and even a follow-up article that you can"}, {"start": 191.26, "end": 195.04, "text": " have a look at and even comment on before it gets finished."}, {"start": 195.04, "end": 200.12, "text": " A huge thank you to Chris Ola, who devotes his time away from research and uses his own"}, {"start": 200.12, "end": 205.44, "text": " money to run this amazing journal, I cannot wait to cover more of these articles in future"}, {"start": 205.44, "end": 210.56, "text": " episodes, so make sure to subscribe and hit the bell icon to never miss any of those."}, {"start": 210.56, "end": 215.8, "text": " So finally, we understand a little more how neural networks do all these amazing things"}, {"start": 215.8, "end": 217.32, "text": " they are able to do."}, {"start": 217.32, "end": 218.96, "text": " What a time to be alive."}, {"start": 218.96, "end": 221.84, "text": " This episode has been supported by weights and biases."}, {"start": 221.84, "end": 226.72, "text": " Here, they show you how you can visualize the training process for your boosted trees"}, {"start": 226.72, "end": 229.28, "text": " with XG boost using that tool."}, {"start": 229.28, "end": 233.8, "text": " If you have a closer look, you'll see that all you need is one line of code."}, {"start": 233.8, "end": 238.2, "text": " Weight and biases provides tools to track your experiments in your deep learning projects."}, {"start": 238.2, "end": 242.92000000000002, "text": " Their system is designed to save you a ton of time and money, and it is actively used"}, {"start": 242.92000000000002, "end": 249.48, "text": " in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more."}, {"start": 249.48, "end": 254.72, "text": " And the best part is that if you're an academic or have an open source project, you can use"}, {"start": 254.72, "end": 256.44, 
"text": " their tools for free."}, {"start": 256.44, "end": 259.04, "text": " It really is as good as it gets."}, {"start": 259.04, "end": 264.32, "text": " Make sure to visit them through wnb.com slash papers or just click the link in the video"}, {"start": 264.32, "end": 267.52000000000004, "text": " description and you can get a free demo today."}, {"start": 267.52000000000004, "end": 271.88, "text": " Our thanks to weights and biases for their long standing support and for helping us make"}, {"start": 271.88, "end": 273.16, "text": " better videos for you."}, {"start": 273.16, "end": 303.12, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=54YvCE8_7lM
Is Simulating Soft and Bouncy Jelly Possible? 🦑
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "A Hybrid Material Point Method for Frictional Contact with Diverse Materials" is available here: - https://www.math.ucla.edu/~jteran/papers/HGGWJT19.pdf - https://www.math.ucla.edu/~qiguo/Hybrid_MPM.pdf ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Christian Ahlin, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 00:00 Physics simulations are amazing 00:47 Cracking and tearing is hard 01:35 Honey simulation 02:20 Finally, jello simulation! 03:10 Snow and hair works too 03:29 Skin works too 04:20 So what is the price? Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. After reading a physics textbook on the laws of fluid motion, with a little effort, we can make a virtual world come alive by writing a computer program that contains these laws, resulting in beautiful fluid simulations like the one you see here. The amount of detail we can simulate with these programs is increasing every year, not only because hardware improves over time, but also because the pace of progress in computer graphics research is truly remarkable. To simulate all these, many recent methods build on top of a technique called the material point method. This is a hybrid simulation technique that uses both particles and grids to create these beautiful animations. However, when used by itself, we can come up with a bunch of phenomena that it cannot simulate properly. One such example is cracking and tearing phenomena, which were addressed in a previous paper that we covered a few videos ago. With this, we can smash Oreos, candy crabs, pumpkins, and much, much more. In a few minutes, I will show you how to combine some of these aspects of a simulation. It is going to be glorious, or maybe not so much. Just give me a moment and you'll see. Beyond that, when using this material point method, coupling problems frequently arise. This means that the sand is allowed to have an effect on the fluid, but at the same time, as the fluid sloshes around, it also moves the sand particles within. This is what we refer to as two-way coupling. If it is implemented correctly, our simulated honey will behave like real honey in the footage here and support the dipper. These are also not trivial to compute with the material point method and require specialized extensions to do so. So, what else is there to do? This amazing new paper provides an extension to handle simulating elastic objects such as hair and rubber, and you will see that it even works for skin simulations and it can handle their interactions with other materials. So why is this useful? Well, we know that we can pull off simulating a bunch of particles and a jello simulation separately, so it's time for some experimentation. This is the one I promised earlier, so let's try to put these two things together and see what happens. It seems to start out okay, particles are bouncing off of the jello and then, uh-oh, look, many of them seem to get stuck. So can we fix this somehow? Well, this is where this new paper comes into play. Look here, it starts out somewhat similarly, most of the particles get pushed away from the jello and then, look, some of them indeed keep bouncing for a long, long time and none of them are stuck to the jello. Glorious. We can see the same phenomenon here with three jello blocks of different stiffness values. With this, we can also simulate more than 10,000 bouncy hair strands and, to the delight of a computer graphics researcher, we can even throw snow into it and expect it to behave correctly. Braids work well too. And if you remember, I also promised some skin simulation and this demonstration is not only super fun, for instance, the ones around this area are perhaps the most entertaining, but the information density of this screen is just absolutely amazing. As we go from bottom to top, you can see the effect of the stiffness parameters or, in other words, the higher we are, the stiffer things become, and as we go from left to right, the effect of damping increases.
And you can see not only a bunch of combinations of these two parameters, but you can also compare many configurations against each other at a glance on the same screen, loving it. So how long does it take to simulate all this? Well, given that we are talking about an offline simulation technique, this is not designed to run in real-time games, as the execution time is typically not measured in frames per second, but in seconds per frame and sometimes even minutes per frame. However, having run simulations that contain much fewer interactions than this and that took me several days to compute, I would argue that these numbers are quite appealing for a method of this class. Also note that this is one of those papers that makes the impossible possible for us and of course, as we always say around here, two more papers down the line and it will be significantly improved. For now, I am very impressed. Time to fire up some elaborate jello simulations. What a time to be alive. This episode has been supported by Lambda. If you are a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you that they are offering GPU Cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold onto your papers because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to lambdalabs.com slash papers and sign up for one of their amazing GPU instances today. Our thanks to Lambda for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
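To get a feel for the stiffness and damping parameters described above, here is a tiny, self-contained Python sketch of a damped mass-spring system. This is only a toy model with made-up constants, not the paper's hybrid material point method, but it shows how sweeping the two parameters changes the motion, much like the bottom-to-top and left-to-right axes of the grid in the video.

import numpy as np

def simulate_spring(stiffness, damping, steps=2000, dt=1e-3):
    """Semi-implicit Euler for a unit mass on a damped spring (toy model only)."""
    x, v = 1.0, 0.0                       # start displaced by one unit, at rest
    for _ in range(steps):
        a = -stiffness * x - damping * v  # Hooke's law force plus viscous damping
        v += dt * a                       # update velocity first (semi-implicit)
        x += dt * v                       # then position
    return x

# Sweep the two parameters like the grid in the video: rows are stiffness, columns are damping.
for k in (10.0, 100.0, 1000.0):
    row = [simulate_spring(k, c) for c in (0.1, 1.0, 10.0)]
    print(f"stiffness={k:7.1f} -> final displacement {np.round(row, 4)}")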
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.64, "end": 9.58, "text": " After reading a physics textbook on the laws of fluid motion, with a little effort, we"}, {"start": 9.58, "end": 15.64, "text": " can make a virtual world come alive by writing a computer program that contains these laws"}, {"start": 15.64, "end": 20.28, "text": " resulting in beautiful fluid simulations like the one you see here."}, {"start": 20.28, "end": 25.16, "text": " The amount of detail we can simulate with these programs is increasing every year, not only"}, {"start": 25.16, "end": 30.68, "text": " due to the fact that hardware improves over time, but also the pace of progress in computer"}, {"start": 30.68, "end": 33.82, "text": " graphics research is truly remarkable."}, {"start": 33.82, "end": 39.16, "text": " To simulate all these, many recent methods built on top of a technique called the material"}, {"start": 39.16, "end": 40.16, "text": " point method."}, {"start": 40.16, "end": 45.72, "text": " This is a hybrid simulation technique that uses both particles and grids to create these"}, {"start": 45.72, "end": 47.32, "text": " beautiful animations."}, {"start": 47.32, "end": 52.72, "text": " However, when used by itself, we can come up with a bunch of phenomena that it cannot"}, {"start": 52.72, "end": 54.36, "text": " simulate properly."}, {"start": 54.36, "end": 59.04, "text": " One such example is cracking and tearing phenomena which has been addressed in a previous"}, {"start": 59.04, "end": 62.2, "text": " paper that we covered a few videos ago."}, {"start": 62.2, "end": 68.64, "text": " With this, we can smash Oreos, candy crabs, pumpkins, and much, much more."}, {"start": 68.64, "end": 73.72, "text": " In a few minutes, I will show you how to combine some of these aspects of a simulation."}, {"start": 73.72, "end": 78.8, "text": " It is going to be glorious, or maybe not so much."}, {"start": 78.8, "end": 81.03999999999999, "text": " Just give me a moment and you'll see."}, {"start": 81.04, "end": 86.56, "text": " Beyond that, when using this material point method, coupling problems frequently arise."}, {"start": 86.56, "end": 91.48, "text": " This means that the sand is allowed to have an effect on the fluid, but at the same time,"}, {"start": 91.48, "end": 96.32000000000001, "text": " as the fluid sloshes around, it also moves the sand particles within."}, {"start": 96.32000000000001, "end": 99.28, "text": " This is what we refer to as two-way coupling."}, {"start": 99.28, "end": 104.48, "text": " If it is implemented correctly, our simulated honey will behave as real honey in the footage"}, {"start": 104.48, "end": 107.72, "text": " here and support the deeper."}, {"start": 107.72, "end": 112.6, "text": " These are also not trivial to compute with the material point method and require specialized"}, {"start": 112.6, "end": 114.48, "text": " extensions to do so."}, {"start": 114.48, "end": 117.16, "text": " So, what else is there to do?"}, {"start": 117.16, "end": 123.0, "text": " This amazing new paper provides an extension to handle simulating elastic objects such as"}, {"start": 123.0, "end": 129.16, "text": " hair, rubber, and you will see that it even works for skin simulations and it can handle"}, {"start": 129.16, "end": 132.4, "text": " their interactions with other materials."}, {"start": 132.4, "end": 134.4, "text": " So why is this useful?"}, {"start": 134.4, "end": 140.16, "text": " 
Well, we know that we can pull off simulating a bunch of particles and a yellow simulation"}, {"start": 140.16, "end": 144.72, "text": " separately, so it's time for some experimentation."}, {"start": 144.72, "end": 149.92000000000002, "text": " This is the one I promised earlier, so let's try to put these two things together and see"}, {"start": 149.92000000000002, "end": 151.64000000000001, "text": " what happens."}, {"start": 151.64000000000001, "end": 159.76, "text": " It seems to start out okay, particles are bouncing off of the yellow and then, uh-oh, look,"}, {"start": 159.76, "end": 162.36, "text": " many of them seem to get stuck."}, {"start": 162.36, "end": 164.64000000000001, "text": " So can we fix this somehow?"}, {"start": 164.64000000000001, "end": 168.36, "text": " Well, this is where this new paper comes into play."}, {"start": 168.36, "end": 173.36, "text": " Look here, it starts out somewhat similarly, most of the particles get pushed away from"}, {"start": 173.36, "end": 180.60000000000002, "text": " the yellow and then, look, some of them indeed keep bouncing for a long, long time and none"}, {"start": 180.60000000000002, "end": 182.96, "text": " of them are stuck to the yellow."}, {"start": 182.96, "end": 184.48000000000002, "text": " Glorious."}, {"start": 184.48000000000002, "end": 189.64000000000001, "text": " We can see the same phenomenon here with three yellow blocks of different stiffness values."}, {"start": 189.64, "end": 195.48, "text": " With this, we can also simulate more than 10,000 bouncy hair strands and to the delight of"}, {"start": 195.48, "end": 201.2, "text": " a computer graphics researcher, we can even throw snow into it and expect it to behave"}, {"start": 201.2, "end": 205.48, "text": " correctly."}, {"start": 205.48, "end": 210.23999999999998, "text": " Braids work well too."}, {"start": 210.23999999999998, "end": 215.39999999999998, "text": " And if you remember, I also promised some skin simulation and this demonstration is not"}, {"start": 215.4, "end": 221.56, "text": " only super fun, for instance, the ones around this area are perhaps the most entertaining,"}, {"start": 221.56, "end": 226.76, "text": " but the information density of this screen is just absolutely amazing."}, {"start": 226.76, "end": 232.4, "text": " As we go from bottom to top, you can see the effect of the stiffness parameters or, in other"}, {"start": 232.4, "end": 237.96, "text": " words, the higher we are, the stiffer things become and as we go from left to right, the"}, {"start": 237.96, "end": 240.88, "text": " effect of damping increases."}, {"start": 240.88, "end": 245.92, "text": " And you can see not only a bunch of combinations of these two parameters, but you can also"}, {"start": 245.92, "end": 252.79999999999998, "text": " compare many configurations against each other at a glance on the same screen, loving it."}, {"start": 252.79999999999998, "end": 255.96, "text": " So how long does it take to simulate all this?"}, {"start": 255.96, "end": 261.15999999999997, "text": " Well, given that we are talking about an offline simulation technique, this is not designed"}, {"start": 261.15999999999997, "end": 266.4, "text": " to run in real time games as the execution time is typically not measured in frames per"}, {"start": 266.4, "end": 272.12, "text": " second, but seconds per frame and sometimes even minutes per frame."}, {"start": 272.12, "end": 277.15999999999997, "text": " However, having run simulations that contain much fewer interactions than this that 
took"}, {"start": 277.15999999999997, "end": 282.08, "text": " me several days to compute, I would argue that these numbers are quite appealing for a"}, {"start": 282.08, "end": 284.35999999999996, "text": " method of this class."}, {"start": 284.35999999999996, "end": 289.15999999999997, "text": " Also note that this is one of those papers that makes the impossible possible for us and"}, {"start": 289.15999999999997, "end": 293.67999999999995, "text": " of course, as we always say around here, two more papers down the line and it will be"}, {"start": 293.67999999999995, "end": 295.71999999999997, "text": " significantly improved."}, {"start": 295.72, "end": 298.16, "text": " For now, I am very impressed."}, {"start": 298.16, "end": 301.16, "text": " Time to fire up some elaborate yellow simulations."}, {"start": 301.16, "end": 303.04, "text": " What a time to be alive."}, {"start": 303.04, "end": 305.48, "text": " This episode has been supported by Lambda."}, {"start": 305.48, "end": 310.56, "text": " If you are a researcher or a startup looking for cheap GPU compute to run these algorithms,"}, {"start": 310.56, "end": 312.84000000000003, "text": " check out Lambda GPU Cloud."}, {"start": 312.84000000000003, "end": 317.44000000000005, "text": " I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you"}, {"start": 317.44000000000005, "end": 320.68, "text": " that they are offering GPU Cloud services as well."}, {"start": 320.68, "end": 327.56, "text": " The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19."}, {"start": 327.56, "end": 332.52, "text": " Lambda's web-based IDE lets you easily access your instance right in your browser."}, {"start": 332.52, "end": 337.84000000000003, "text": " And finally, hold onto your papers because the Lambda GPU Cloud costs less than half of"}, {"start": 337.84000000000003, "end": 340.08, "text": " AWS and Azure."}, {"start": 340.08, "end": 345.28000000000003, "text": " Make sure to go to lambdaleps.com slash papers and sign up for one of their amazing GPU"}, {"start": 345.28000000000003, "end": 346.6, "text": " instances today."}, {"start": 346.6, "end": 350.16, "text": " Our thanks to Lambda for helping us make better videos for you."}, {"start": 350.16, "end": 354.0, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=EWKAgwgqXB4
This AI Creates Beautiful Time Lapse Videos ☀️
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers The shown blog post is available here: https://www.wandb.com/articles/intro-to-cnns-with-wandb 📝 The paper "High-Resolution Daytime Translation Without Domain Labels" is available here: - https://saic-mdal.github.io/HiDT/ - https://github.com/saic-mdal/HiDT 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Christian Ahlin, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. A few years ago, we mainly saw neural network-based techniques being used for image classification. This means that they were able to recognize objects, for instance animals and traffic signs, in images. But today, with the incredible pace of machine learning research, we now have a selection of neural network-based techniques for not only classifying images, but also synthesizing them. The images that you see here and throughout this video are generated by one of these learning-based methods. But of course, in this series, we are always obsessed with artistic control, or, in other words, how much of a say we have in the creation of these images. After all, getting thousands and thousands of images without any overarching theme or artistic control is hardly useful for anyone. One way of being able to control the outputs is to use a technique that is capable of image translation. What you see here is a work by the name CycleGAN. It could transform apples into oranges, zebras into horses, and more. It was called CycleGAN because it introduced a cycle consistency loss function. This means that if we convert a summer image to a winter image, and then back to a summer image, we should get the same image back, or at least something very similar. If our learning system obeys this principle, the output quality of the translation is going to be significantly better. Today, we are going to study a more advanced image translation technique that takes this further. This paper is amazingly good at daytime image translation. It looks at a selection of landscape images, and then, as you see here, it learns to reimagine our input photos as if they were taken at different times of the day. I love how clouds form and move over time in the synthesized images, and the night sky with the stars is also truly a sight to behold. But wait, CycleGAN and many other follow-up works did image translation. This also does image translation. So, what's really new here? Well, one, this work proposes a novel up-sampling scheme that helps create output images with lots and lots of detail. Two, it can create not just a bunch of images taken a few hours apart, but also beautiful time-lapse videos where the transitions are smooth. Oh my goodness, I love this. And three, the training happens by shoveling 20,000 landscape images into the neural network, and it becomes able to perform this translation task without labels. This means that we don't have to explicitly search for all the daytime images and tell the learner that these are daytime images and these other images are not. This is amazing, because the algorithm is able to learn by itself without labels, but it is also easier to use because we can feed in lots and lots more training data without having to label these images correctly. Note that this daytime translation task is used as a testbed to demonstrate that this method can be reused for other kinds of image translation tasks. The fact that it can learn on its own and still compete with other works in this area is truly incredible. Due to this kind of generality, it can also perform other related tasks. For instance, it can perform style transfer, or in other words, not just change the time of day, but reimagine our pictures in the style of famous artists.
I think with this paper, we have a really capable technique on our hands that is getting closer and closer to the point where it can see use in mainstream software packages and image editors. That would be absolutely amazing. If you have a closer look at the paper, you will see that it tries to minimize seven things at the same time. What a time to be alive! This episode has been supported by Weights & Biases. Here, they show you how to build a proper convolutional neural network for image classification and how to visualize the performance of your model. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you are an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
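For readers who want to see the cycle consistency idea above written down, here is a minimal Python sketch. The two "generators" are toy stand-in functions; a real CycleGAN uses trained neural networks and adds adversarial losses on top, so this only illustrates the structure of the cycle term.

import numpy as np

# Stand-in "generators"; in a real CycleGAN these are neural networks.
def summer_to_winter(img):
    return np.clip(img * 0.8 + 0.1, 0.0, 1.0)    # toy translation A -> B

def winter_to_summer(img):
    return np.clip((img - 0.1) / 0.8, 0.0, 1.0)  # toy translation B -> A

def cycle_consistency_loss(img):
    """L1 distance between an image and its A -> B -> A reconstruction."""
    reconstructed = winter_to_summer(summer_to_winter(img))
    return np.mean(np.abs(img - reconstructed))

summer_img = np.random.rand(64, 64, 3)           # placeholder 64x64 RGB "summer" photo
print(f"cycle loss: {cycle_consistency_loss(summer_img):.6f}")  # near zero when the cycle is consistent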
[{"start": 0.0, "end": 4.36, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Dr. Karo Joel Naifahir."}, {"start": 4.36, "end": 10.56, "text": " A few years ago, we have mainly seen neural network-based techniques being used for image classification."}, {"start": 10.56, "end": 17.12, "text": " This means that they were able to recognize objects, for instance animals and traffic signs in images."}, {"start": 17.12, "end": 20.6, "text": " But today, with the incredible pace of machine learning research,"}, {"start": 20.6, "end": 26.0, "text": " we now have a selection of neural network-based techniques for not only classifying images,"}, {"start": 26.0, "end": 28.64, "text": " but also synthesizing them."}, {"start": 28.64, "end": 34.76, "text": " The images that you see here and throughout this video is generated by one of these learning-based methods."}, {"start": 34.76, "end": 39.64, "text": " But of course, in this series, we are always obsessed with artistic control,"}, {"start": 39.64, "end": 45.120000000000005, "text": " or, in other words, how much of a say we have in the creation of these images."}, {"start": 45.120000000000005, "end": 51.64, "text": " After all, getting thousands and thousands of images without any overarching theme or artistic control"}, {"start": 51.64, "end": 53.72, "text": " is hardly useful for anyone."}, {"start": 53.72, "end": 60.44, "text": " One way of being able to control the outputs is to use a technique that is capable of image translation."}, {"start": 60.44, "end": 63.8, "text": " What you see here is a work by the name CycleGan."}, {"start": 63.8, "end": 69.16, "text": " It could transform apples into oranges, zebras into horses, and more."}, {"start": 69.16, "end": 74.36, "text": " It was called CycleGan because it introduced a cycle consistency loss function."}, {"start": 74.36, "end": 78.32, "text": " This means that if we convert a summer image to a winter image,"}, {"start": 78.32, "end": 85.67999999999999, "text": " and then back to a summer image, we should get the same image back, or at least something very similar."}, {"start": 85.67999999999999, "end": 90.32, "text": " If our learning system obeys this principle, the output quality of the translation"}, {"start": 90.32, "end": 92.72, "text": " is going to be significantly better."}, {"start": 92.72, "end": 98.32, "text": " Today, we are going to study a more advanced image translation technique that takes this further."}, {"start": 98.32, "end": 102.44, "text": " This paper is amazingly good at daytime image translation."}, {"start": 102.44, "end": 104.91999999999999, "text": " It looks at a selection of landscape images,"}, {"start": 104.92, "end": 109.64, "text": " and then, as you see here, it learns to reimagine our input photos"}, {"start": 109.64, "end": 112.92, "text": " as if they were taken at different times of the day."}, {"start": 112.92, "end": 118.04, "text": " I love how clouds form and move over time in the synthesized images,"}, {"start": 118.04, "end": 122.52000000000001, "text": " and the night sky with the stars is also truly a sight to behold."}, {"start": 122.52000000000001, "end": 127.8, "text": " But wait, CycleGan and many other follow-up works did image translation."}, {"start": 127.8, "end": 130.12, "text": " This also does image translation."}, {"start": 130.12, "end": 132.68, "text": " So, what's really new here?"}, {"start": 132.68, "end": 136.84, "text": " Well, one, this work proposes a novel up-sampling scheme"}, {"start": 136.84, "end": 141.24, 
"text": " that helps creating output images with lots and lots of detail."}, {"start": 141.24, "end": 146.12, "text": " Two, it can also create not just a bunch of images, a few hours apart,"}, {"start": 146.12, "end": 152.04000000000002, "text": " but it can also make beautiful time-lapse videos where the transitions are smooth."}, {"start": 152.04000000000002, "end": 154.76000000000002, "text": " Oh my goodness, I love this."}, {"start": 154.76000000000002, "end": 160.68, "text": " And three, the training happens by shoveling 20,000 landscape images into the neural network,"}, {"start": 160.68, "end": 165.4, "text": " and it becomes able to perform this translation task without labels."}, {"start": 165.4, "end": 169.4, "text": " This means that we don't have to explicitly search for all the daytime images"}, {"start": 169.4, "end": 174.6, "text": " and tell the learner that these are daytime images and these other images are not."}, {"start": 174.6, "end": 180.20000000000002, "text": " This is amazing, because the algorithm is able to learn by itself without labels,"}, {"start": 180.20000000000002, "end": 185.24, "text": " but it is also easier to use because we can feed in lots and lots more training data"}, {"start": 185.24, "end": 188.04000000000002, "text": " without having to label these images correctly."}, {"start": 188.04, "end": 193.07999999999998, "text": " As a result, we now know that this daytime translation task is used as a testbed"}, {"start": 193.07999999999998, "end": 198.2, "text": " to demonstrate that this method can be reused for other kinds of image translation tasks."}, {"start": 199.07999999999998, "end": 204.44, "text": " The fact that it can learn on its own and still compete with other works in this area"}, {"start": 204.44, "end": 206.12, "text": " is truly incredible."}, {"start": 206.12, "end": 210.84, "text": " Due to this kind of generality, it can also perform other related tasks."}, {"start": 210.84, "end": 217.56, "text": " For instance, it can perform style transfer, or in other words, not just change the time of day,"}, {"start": 217.56, "end": 221.8, "text": " but reimagine our pictures in the style of famous artists."}, {"start": 221.8, "end": 225.88, "text": " I think with this paper, we have a really capable technique on our hands"}, {"start": 225.88, "end": 229.64000000000001, "text": " that is getting closer and closer to the point where they can see use"}, {"start": 229.64000000000001, "end": 232.6, "text": " in mainstream software packages and image editors."}, {"start": 233.32, "end": 235.32, "text": " That would be absolutely amazing."}, {"start": 235.88, "end": 239.72, "text": " If you have a closer look at the paper, you will see that it tries to minimize"}, {"start": 239.72, "end": 241.72, "text": " seven things at the same time."}, {"start": 242.28, "end": 243.56, "text": " What a time to be alive!"}, {"start": 244.04, "end": 247.4, "text": " This episode has been supported by weights and biases."}, {"start": 247.4, "end": 253.32, "text": " Here, they show you how to build a proper convolutional neural network for image classification"}, {"start": 253.32, "end": 256.76, "text": " and how to visualize the performance of your model."}, {"start": 256.76, "end": 261.24, "text": " Weight and biases provides tools to track your experiments in your deep learning projects."}, {"start": 261.24, "end": 264.76, "text": " Their system is designed to save you a ton of time and money,"}, {"start": 264.76, "end": 269.48, "text": " and it is actively 
used in projects at prestigious labs such as OpenAI,"}, {"start": 269.48, "end": 272.36, "text": " Toyota Research, GitHub, and more."}, {"start": 272.36, "end": 277.24, "text": " And the best part is that if you are an academic or have an open source project,"}, {"start": 277.24, "end": 279.40000000000003, "text": " you can use their tools for free."}, {"start": 279.40000000000003, "end": 281.96000000000004, "text": " It really is as good as it gets."}, {"start": 281.96000000000004, "end": 285.64, "text": " Make sure to visit them through wnb.com slash papers,"}, {"start": 285.64, "end": 288.28000000000003, "text": " or just click the link in the video description,"}, {"start": 288.28000000000003, "end": 290.52000000000004, "text": " and you can get a free demo today."}, {"start": 290.52000000000004, "end": 293.64, "text": " Our thanks to weights and biases for their longstanding support"}, {"start": 293.64, "end": 296.36, "text": " and for helping us make better videos for you."}, {"start": 296.36, "end": 298.6, "text": " Thanks for watching and for your generous support,"}, {"start": 298.6, "end": 308.6, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=bVXPnP8k6yo
This AI Learned to Summarize Videos 🎥
❤️ Check out Linode here and get $20 free credit on your account: https://www.linode.com/papers 📝 The paper "CLEVRER: CoLlision Events for Video REpresentation and Reasoning" is available here: http://clevrer.csail.mit.edu/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Christian Ahlin, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-95032/ Neural network image credit: https://en.wikipedia.org/wiki/Neural_network Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Neural network-based learning algorithms are making great leaps in a variety of areas. And many of us are wondering whether it is possible that one day we'll get a learning algorithm, show it a video, and ask it to summarize it, and we can then decide whether we wish to watch it or not. Or just describe what we are looking for, and it would fetch the appropriate videos for us. I think today's paper is a good pointer as to whether we can expect this to happen, and in a few moments we'll find out together why. A few years ago, these neural networks were mainly used for image classification, or in other words, they would tell us what kinds of objects are present in an image. But they are capable of so much more. For instance, these days we can get a recurrent neural network to write proper sentences about images, and it would work well even for highly non-trivial cases. For instance, it is able to infer that work is being done here, or that a ball is present in this image even if the vast majority of the ball itself is concealed. The even crazier thing about this is that this work is not recent at all, this is from a more than four-year-old paper. Insanity. The first author of this paper was Andrej Karpathy, one of the best minds in the game, who is currently the director of AI at Tesla, and works on making these cars able to drive themselves. So as amazing as this work was, progress in machine learning research keeps on accelerating. So let's have a look at this newer paper that takes it a step further and looks not at an image, but at a video, and explains what happens therein. Very exciting. Let's have a look at an example. This was the input video, and let's stop right at the first statement. The red sphere enters the scene. So, it was able to correctly identify not only what we are talking about in terms of color and shape, but it also knows what this object is doing as well. That's a great start. Let's proceed further. Now it correctly identifies the collision event with the cylinder. Then, this cylinder hits another cylinder, very good, and look at that. It identifies that the cylinder is made of metal. I like that a lot because this particular object is made of a very reflective material, which shows us more about the surrounding room than the object itself. But we shouldn't only let the AI tell us what is going on on its own terms, let's ask questions and see if it can answer them correctly. So first, let's ask what is the material of the last object that hit the cyan cylinder, and it correctly finds that the answer is metal. Awesome. Now let's take it a step further and stop the video here. Can it predict what is about to happen after this point? Look, it indeed can. This is remarkable because of two things. One, if we look under the hood, we see that to be able to pull this off, it not only has to understand what objects are present in the video and predict how they will interact, but it also has to parse our questions correctly, put it all together, and form an answer based on all this information. If any of these tasks works unreliably, the answer will be incorrect. And two, there are many other techniques that are able to do some of these tasks, so why is this one particularly interesting? Well, look here. This new method is able to do all of these tasks at the same time.
So there we go, if this improves further, we might become able to search YouTube videos by just typing something that happens in the video and it would be able to automatically find it for us. That would be absolutely amazing. What a time to be alive. This episode has been supported by Linode. Linode is the world's largest independent cloud computing provider. Unlike entry-level hosting services, Linode gives you full backend access to your server, which is a step up to powerful, fast, fully configurable cloud computing. Linode also has one click apps that streamline your ability to deploy websites, personal VPNs, game servers and more. If you need something as small as a personal online portfolio, Linode has your back and if you need to manage tons of clients' websites and reliably serve them to millions of visitors, Linode can do that too. What's more, they offer affordable GPU instances featuring the Quadro RTX 6000, which is tailor-made for AI, scientific computing and computer graphics projects. If only I had access to a tool like this while I was working on my last few papers. To receive $20 in credit on your new Linode account, visit linode.com slash papers or click the link in the description and give it a try today. Our thanks to Linode for supporting the series and helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
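As a purely hypothetical illustration of the pipeline described above, one that has to recognize objects, reason about collision events, and parse the question before it can answer, here is a toy Python sketch with a hardcoded scene standing in for the learned perception and dynamics modules.

# Toy stand-ins for the stages described above; a real system would use learned
# perception, dynamics, and language modules instead of hardcoded data.
scene_objects = [
    {"id": 0, "shape": "sphere",   "color": "red",  "material": "rubber"},
    {"id": 1, "shape": "cylinder", "color": "cyan", "material": "rubber"},
    {"id": 2, "shape": "cylinder", "color": "gray", "material": "metal"},
]
# (time, id_a, id_b) collision events that a dynamics module would detect or predict.
collisions = [(1.2, 0, 1), (2.5, 2, 1)]

def last_object_to_hit(target_id):
    hits = [(t, a, b) for (t, a, b) in collisions if target_id in (a, b)]
    t, a, b = max(hits)                   # the latest collision involving the target
    other_id = a if b == target_id else b
    return scene_objects[other_id]

# "What is the material of the last object that hit the cyan cylinder?"
answer = last_object_to_hit(target_id=1)["material"]
print(answer)  # -> metal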
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Kato Zonai-Fehir."}, {"start": 4.8, "end": 9.92, "text": " Neural Network-based learning algorithms are making great leaps in a variety of areas."}, {"start": 9.92, "end": 14.76, "text": " And many of us are wondering whether it is possible that one day we'll get a learning"}, {"start": 14.76, "end": 21.080000000000002, "text": " algorithm, show it a video, and ask it to summarize it, and we can then decide whether"}, {"start": 21.080000000000002, "end": 23.32, "text": " we wish to watch it or not."}, {"start": 23.32, "end": 27.8, "text": " Or just describe what we are looking for, and it would fetch the appropriate videos for"}, {"start": 27.8, "end": 28.8, "text": " us."}, {"start": 28.8, "end": 33.76, "text": " Thinking today's paper has a good pointer whether we can expect this to happen, and in a"}, {"start": 33.76, "end": 37.04, "text": " few moments we'll find out together why."}, {"start": 37.04, "end": 42.08, "text": " A few years ago, these neural networks were mainly used for image classification, or in"}, {"start": 42.08, "end": 47.400000000000006, "text": " other words, they would tell us what kinds of objects are present in an image."}, {"start": 47.400000000000006, "end": 53.040000000000006, "text": " But they are capable of so much more, for instance, these days we can get a recurrent neural"}, {"start": 53.04, "end": 59.6, "text": " network, write proper sentences about images, and it would work well for even highly non-trivial"}, {"start": 59.6, "end": 60.68, "text": " cases."}, {"start": 60.68, "end": 66.44, "text": " For instance, it is able to infer that work is being done here, or that a ball is present"}, {"start": 66.44, "end": 72.32, "text": " in this image even if the vast majority of the ball itself is concealed."}, {"start": 72.32, "end": 77.36, "text": " The even crazier thing about this is that this work is not recent at all, this is from"}, {"start": 77.36, "end": 80.12, "text": " a more than four-year-old paper."}, {"start": 80.12, "end": 81.44, "text": " Insanity"}, {"start": 81.44, "end": 86.67999999999999, "text": " The first author of this paper was Andr\u00e9 Carpathy, one of the best minds in the game,"}, {"start": 86.67999999999999, "end": 92.0, "text": " who is currently the director of AI at Tesla, and works on making these cards able to drive"}, {"start": 92.0, "end": 93.32, "text": " themselves."}, {"start": 93.32, "end": 99.75999999999999, "text": " So as amazing as this work was, progress in machine learning research keeps on accelerating."}, {"start": 99.75999999999999, "end": 104.72, "text": " So let's have a look at this newer paper that takes it a step further and has a look"}, {"start": 104.72, "end": 110.44, "text": " not at an image, but a video and explains what happens there in."}, {"start": 110.44, "end": 111.44, "text": " Very exciting."}, {"start": 111.44, "end": 114.08, "text": " Let's have a look at an example."}, {"start": 114.08, "end": 118.24, "text": " This was the input video and let's stop right at the first statement."}, {"start": 118.24, "end": 120.32, "text": " The red sphere enters the scene."}, {"start": 120.32, "end": 126.44, "text": " So, it was able to correctly identify not only what we are talking about in terms of color"}, {"start": 126.44, "end": 130.96, "text": " and shape, but also knows what this object is doing as well."}, {"start": 130.96, "end": 132.56, "text": " That's a great start."}, {"start": 132.56, "end": 
134.12, "text": " Let's proceed further."}, {"start": 134.12, "end": 138.07999999999998, "text": " Now it correctly identifies the collision event with the cylinder."}, {"start": 138.08, "end": 145.0, "text": " Then, this cylinder hits another cylinder, very good, and look at that."}, {"start": 145.0, "end": 148.20000000000002, "text": " It identifies that the cylinder is made of metal."}, {"start": 148.20000000000002, "end": 154.0, "text": " I like that a lot because this particular object is made of a very reflective material, which"}, {"start": 154.0, "end": 159.24, "text": " shows us more about the surrounding room than the object itself."}, {"start": 159.24, "end": 164.28, "text": " But we shouldn't only let the AI tell us what is going on on its own terms, let's ask"}, {"start": 164.28, "end": 168.04000000000002, "text": " questions and see if it can answer them correctly."}, {"start": 168.04, "end": 174.6, "text": " So first, let's ask what is the material of the last object that hit the cyan cylinder,"}, {"start": 174.6, "end": 178.64, "text": " and it correctly finds that the answer is metal."}, {"start": 178.64, "end": 181.72, "text": " Awesome."}, {"start": 181.72, "end": 187.12, "text": " Now let's take it a step further and stop the video here, can it predict what is about"}, {"start": 187.12, "end": 189.44, "text": " to happen after this point?"}, {"start": 189.44, "end": 192.04, "text": " Look, it indeed can."}, {"start": 192.04, "end": 194.92, "text": " This is remarkable because of two things."}, {"start": 194.92, "end": 199.67999999999998, "text": " If we look under the hood, we see that to be able to pull this off, it not only has"}, {"start": 199.67999999999998, "end": 205.32, "text": " to understand what objects are present in the video and predict how they will interact,"}, {"start": 205.32, "end": 211.44, "text": " but also has to parse our questions correctly, put it all together and form an answer based"}, {"start": 211.44, "end": 213.48, "text": " on all this information."}, {"start": 213.48, "end": 218.72, "text": " If any of these tasks works unreliably, the answer will be incorrect."}, {"start": 218.72, "end": 224.07999999999998, "text": " And two, there are many other techniques that are able to do some of these tasks, so"}, {"start": 224.08, "end": 226.96, "text": " why is this one particularly interesting?"}, {"start": 226.96, "end": 229.12, "text": " Well, look here."}, {"start": 229.12, "end": 233.68, "text": " This new method is able to do all of these tasks at the same time."}, {"start": 233.68, "end": 238.84, "text": " So there we go, if this improves further, we might become able to search YouTube videos"}, {"start": 238.84, "end": 243.88000000000002, "text": " by just typing something that happens in the video and it would be able to automatically"}, {"start": 243.88000000000002, "end": 245.56, "text": " find it for us."}, {"start": 245.56, "end": 248.04000000000002, "text": " That would be absolutely amazing."}, {"start": 248.04000000000002, "end": 249.88000000000002, "text": " What a time to be alive."}, {"start": 249.88, "end": 254.56, "text": " This episode has been supported by Linode. 
Linode is the world's largest independent cloud"}, {"start": 254.56, "end": 256.04, "text": " computing provider."}, {"start": 256.04, "end": 261.36, "text": " Unlike entry-level hosting services, Linode gives you full backend access to your server,"}, {"start": 261.36, "end": 266.4, "text": " which is a step up to powerful, fast, fully configurable cloud computing."}, {"start": 266.4, "end": 271.58, "text": " Linode also has one click apps that streamline your ability to deploy websites, personal"}, {"start": 271.58, "end": 274.56, "text": " VPNs, game servers and more."}, {"start": 274.56, "end": 279.56, "text": " If you need something as small as a personal online portfolio, Linode has your back and"}, {"start": 279.56, "end": 285.52, "text": " if you need to manage tons of clients' websites and reliably serve them to millions of visitors,"}, {"start": 285.52, "end": 287.2, "text": " Linode can do that too."}, {"start": 287.2, "end": 294.28000000000003, "text": " What's more, they offer affordable GPU instances featuring the Quadro RTX 6000, which is tailor-made"}, {"start": 294.28000000000003, "end": 299.12, "text": " for AI, scientific computing and computer graphics projects."}, {"start": 299.12, "end": 304.0, "text": " If only I had access to a tool like this while I was working on my last few papers."}, {"start": 304.0, "end": 310.84, "text": " To receive $20 in credit on your new Linode account, visit linode.com slash papers or click"}, {"start": 310.84, "end": 313.8, "text": " the link in the description and give it a try today."}, {"start": 313.8, "end": 318.92, "text": " Our thanks to Linode for supporting the series and helping us make better videos for you."}, {"start": 318.92, "end": 348.88, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=hFZlxpJPI5w
Sure, DeepFake Detectors Exist - But Can They Be Fooled?
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Adversarial Deepfakes: Evaluating Vulnerability of Deepfake Detectors to Adversarial Examples" is available here: https://arxiv.org/abs/2002.12749 https://adversarialdeepfakes.github.io/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Christian Ahlin, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #Deepfake
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. With the ascendancy of neural network-based learning algorithms, we are now able to take on and defeat problems that sounded completely impossible just a few years ago. For instance, now we can create deepfakes, or in other words, we can record a short video of ourselves and transfer our gestures to a target subject, and this particular technique is so advanced that we don't even need a video of our target, just one still image. So we can even use paintings, images of sculptures, so yes, even the Mona Lisa works. However, don't despair, it's not all doom and gloom. A paper by the name FaceForensics++ contains a large dataset of original and manipulated video pairs. As this offered a ton of training data for real and forged videos, it became possible to use these to train a deepfake detector. You can see it here in action as these green-to-red colors showcase regions that the AI correctly thinks were tampered with. However, if we have access to a deepfake detector, we can also use it to improve our deepfake-creating algorithms. And with this, an arms race has begun. The paper we are looking at today showcases this phenomenon. If you look here, you see this footage, which is very visibly fake, and the algorithm correctly concludes that. Now, if you look at this video, which to us looks like the very same video, it suddenly became real, at least the AI thinks so, of course, incorrectly. This is very confusing. So what really happened here? To understand what is going on here, we first have to talk about ostriches. So what do ostriches have to do with this insanity? Let me try to explain that. An adversarial attack on a neural network can be performed as follows. We present such a classifier network with an image of a bus, and it will successfully tell us that yes, this is indeed a bus. Nothing too crazy here. Now, we show it another image of a bus, but a bus plus some carefully crafted noise that is barely perceptible, which forces the neural network to misclassify it as an ostrich. I will stress that this is not just any kind of noise, but the kind of noise that exploits biases in the neural network, which is by no means trivial to craft. However, if we succeed at that, this kind of adversarial attack can be pulled off on many different kinds of images. Everything that you see here on the right will be classified as an ostrich by the neural network these noise patterns were created for. And this can now be done not only on images, but on videos as well, hence what happened a minute ago is that the deepfake video has been adversarially modified with noise to bypass such a detector. If you look here, you see that the authors have chosen excellent examples because some of these are clearly forged videos, which is initially recognized by the detector algorithm, but after adding the adversarial noise to them, the detector fails spectacularly. To demonstrate the utility of their technique, they have chosen the other examples to be much more subtle. Now, let's talk about one more question. We were talking about a detector algorithm, but there is not just one detector out there, there are many, and we can change the wiring of these neural networks to have even more variation. So, what does it mean to fool a detector? Excellent question.
The success rate of these adversarial videos indeed depends on the deepfake detector we are up against, but hold on to your papers, because this success rate on uncompressed videos is over 98%, which is amazing, but note that when using video compression, this success rate may drop to 58% to 92% depending on the detector. This means that video compression and some other tricks involving image transformations still help us in defending against these adversarial attacks. What I also really like about the paper is that it discusses white and black box attacks separately. In the white box case, we know everything about the inner workings of the detector, including the neural network architecture and parameters, and this is typically the easier case. But the technique also does really well in the black box case, where we are not allowed to look under the hood of the detector, but we can show it a few videos and see how it reacts to them. This is a really cool work that gives us a more nuanced view of the current state of the art around deepfakes and deepfake detectors. I think it is best if we all know about the fact that these tools exist. If you wish to help us with this endeavor, please make sure to share this with your friends. Thank you. This episode has been supported by Lambda. If you are a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you that they are offering GPU Cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold onto your papers because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to lambdalabs.com slash papers and sign up for one of their amazing GPU instances today. Our thanks to Lambda for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
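To make the "image plus carefully crafted noise" idea above concrete, here is a minimal fast-gradient-sign style sketch in Python against a toy logistic "detector". The weights and the frame are random placeholders, and real deepfake detectors are deep networks, so this only illustrates why a barely perceptible perturbation can flip a classifier's decision.

import numpy as np

rng = np.random.default_rng(0)
n_pixels = 64 * 64
w = rng.normal(size=n_pixels)            # toy linear "deepfake detector" weights
x = rng.uniform(size=n_pixels)           # placeholder video frame, flattened to a vector
b = 2.0 - w @ x                          # bias chosen so the clean frame is flagged as fake

def fake_probability(frame):
    """Toy logistic detector: probability that the frame is classified as fake."""
    return 1.0 / (1.0 + np.exp(-(w @ frame + b)))

# Fast-gradient-sign style noise: for this linear model the gradient of the logit with
# respect to the input is simply w, so stepping against sign(w) lowers the fake score
# while changing every pixel value by at most eps.
eps = 0.02
x_adv = np.clip(x - eps * np.sign(w), 0.0, 1.0)

print(f"fake probability before the attack: {fake_probability(x):.3f}")      # about 0.88
print(f"fake probability after the attack:  {fake_probability(x_adv):.3f}")  # drops toward 0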
[{"start": 0.0, "end": 5.76, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Karo Zsolnai-Fehir, with the ascendancy"}, {"start": 5.76, "end": 11.24, "text": " of Neural Network-Based Learning Agrithams, we are now able to take on and defeat problems"}, {"start": 11.24, "end": 15.48, "text": " that sounded completely impossible just a few years ago."}, {"start": 15.48, "end": 21.240000000000002, "text": " For instance, now we can create deepfakes, or in other words, we can record a short video"}, {"start": 21.240000000000002, "end": 26.48, "text": " of ourselves and transfer our gestures to a target subject, and this particular technique"}, {"start": 26.48, "end": 32.4, "text": " is so advanced that we don't even need a video of our target, just one still image."}, {"start": 32.4, "end": 40.16, "text": " So we can even use paintings, images of sculptures, so yes, even the Mona Lisa works."}, {"start": 40.16, "end": 43.6, "text": " However, don't despair, it's not all doom and gloom."}, {"start": 43.6, "end": 49.400000000000006, "text": " A paper by the name Face for Enzyx contains a large dataset of original and manipulated"}, {"start": 49.400000000000006, "end": 50.92, "text": " video pairs."}, {"start": 50.92, "end": 56.08, "text": " As this offered a ton of training data for real and forged videos, it became possible"}, {"start": 56.08, "end": 59.6, "text": " to use these to train a deepfake detector."}, {"start": 59.6, "end": 65.64, "text": " You can see it here in action as these green-to-red colors showcase regions that the AI correctly"}, {"start": 65.64, "end": 67.56, "text": " thinks were tampered with."}, {"start": 67.56, "end": 73.56, "text": " However, if we have access to a deepfake detector, we can also use it to improve our deepfake"}, {"start": 73.56, "end": 75.32, "text": " creating algorithms."}, {"start": 75.32, "end": 78.08, "text": " And with this, an arms race has begun."}, {"start": 78.08, "end": 82.0, "text": " The paper we are looking at today showcases this phenomenon."}, {"start": 82.0, "end": 87.92, "text": " If you look here, you see this footage, which is very visibly fake, and the algorithm correctly"}, {"start": 87.92, "end": 89.4, "text": " concludes that."}, {"start": 89.4, "end": 95.84, "text": " Now, if you look at this video, which for us looks like if it were the same video, yet"}, {"start": 95.84, "end": 102.16, "text": " it suddenly became real, at least the AI thinks that, of course, incorrectly."}, {"start": 102.16, "end": 104.12, "text": " This is very confusing."}, {"start": 104.12, "end": 106.32, "text": " So what really happened here?"}, {"start": 106.32, "end": 111.44, "text": " To understand what is going on here, we first have to talk about ostriches."}, {"start": 111.44, "end": 115.12, "text": " So what do ostriches have to do with this insanity?"}, {"start": 115.12, "end": 116.64, "text": " Let me try to explain that."}, {"start": 116.64, "end": 121.16, "text": " An adversarial attack on a neural network can be performed as follows."}, {"start": 121.16, "end": 125.88, "text": " We present such a classifier network with an image of a bus, and it will successfully"}, {"start": 125.88, "end": 129.6, "text": " tell us that yes, this is indeed a bus."}, {"start": 129.6, "end": 130.96, "text": " Nothing too crazy here."}, {"start": 130.96, "end": 137.2, "text": " Now, we show it another image of a bus, but a bus plus some carefully crafted noise that"}, {"start": 137.2, "end": 142.83999999999997, "text": " is barely 
perceptible, that forces the neural network to misclassify it as an ostrich."}, {"start": 142.83999999999997, "end": 148.16, "text": " I will stress that this is not any kind of noise, but the kind of noise that exploits biases"}, {"start": 148.16, "end": 152.51999999999998, "text": " in the neural network, which is by no means trivial to craft."}, {"start": 152.51999999999998, "end": 157.2, "text": " However, if we succeed at that, this kind of adversarial attack can be pulled off on"}, {"start": 157.2, "end": 159.48, "text": " many different kinds of images."}, {"start": 159.48, "end": 163.51999999999998, "text": " Everything that you see here on the right will be classified as an ostrich, but the neural"}, {"start": 163.51999999999998, "end": 166.88, "text": " network, these noise patterns were created for."}, {"start": 166.88, "end": 172.56, "text": " And this can now be done not only on images, but videos as well, hence what happened a"}, {"start": 172.56, "end": 178.84, "text": " minute ago is that the deep fake video has been adversarially modified with noise to bypass"}, {"start": 178.84, "end": 180.88, "text": " such a detector."}, {"start": 180.88, "end": 185.64, "text": " If you look here, you see that the authors have chosen excellent examples because some"}, {"start": 185.64, "end": 191.72, "text": " of these are clearly forged videos, which is initially recognized by the detector algorithm,"}, {"start": 191.72, "end": 196.76, "text": " but after adding the adversarial noise to it, the detector fails spectacularly."}, {"start": 196.76, "end": 200.76, "text": " To demonstrate the utility of their technique, they have chosen the other examples to be"}, {"start": 200.76, "end": 202.48, "text": " much more subtle."}, {"start": 202.48, "end": 205.2, "text": " Now, let's talk about one more question."}, {"start": 205.2, "end": 210.35999999999999, "text": " We were talking about A detector algorithm, but there is not one detector out there, there"}, {"start": 210.35999999999999, "end": 215.84, "text": " are many, and we can change the wiring of these neural networks to have even more variation."}, {"start": 215.84, "end": 219.51999999999998, "text": " So, what does it mean to fool a detector?"}, {"start": 219.51999999999998, "end": 220.56, "text": " Excellent question."}, {"start": 220.56, "end": 225.48, "text": " The success rate of these adversarial videos, indeed, depends on the deep fake detector"}, {"start": 225.48, "end": 231.79999999999998, "text": " we are up against, but hold on to your papers because this success rate on uncompressed videos"}, {"start": 231.79999999999998, "end": 238.67999999999998, "text": " is over 98%, which is amazing, but note that when using video compression, this success"}, {"start": 238.67999999999998, "end": 244.28, "text": " rate may drop to 58% to 92% depending on the detector."}, {"start": 244.28, "end": 249.23999999999998, "text": " This means that video compression and some other tricks involving image transformations"}, {"start": 249.23999999999998, "end": 253.32, "text": " still help us in defending against these adversarial attacks."}, {"start": 253.32, "end": 258.64, "text": " What I also really like about the paper is that it discusses white and black box attacks"}, {"start": 258.64, "end": 260.08, "text": " separately."}, {"start": 260.08, "end": 265.24, "text": " In the white box case, we know everything about the inner workings of the detector, including"}, {"start": 265.24, "end": 271.4, "text": " the neural network architecture and 
parameters, this is typically the easier case."}, {"start": 271.4, "end": 276.15999999999997, "text": " But the technique also does really well in the black box case where we are not allowed"}, {"start": 276.15999999999997, "end": 281.76, "text": " to look under the hood of the detector, but we can show it a few videos and see how"}, {"start": 281.76, "end": 283.28, "text": " it reacts to them."}, {"start": 283.28, "end": 287.84, "text": " This is a really cool work that gives us a more nuanced view about the current state of"}, {"start": 287.84, "end": 291.28, "text": " the art around deepfakes and deep fake detectors."}, {"start": 291.28, "end": 295.47999999999996, "text": " I think it is best if we all know about the fact that these tools exist."}, {"start": 295.47999999999996, "end": 300.44, "text": " If you wish to help us with this endeavor, please make sure to share this with your friends."}, {"start": 300.44, "end": 301.44, "text": " Thank you."}, {"start": 301.44, "end": 303.84, "text": " This episode has been supported by Lambda."}, {"start": 303.84, "end": 308.96, "text": " If you are a researcher or a startup looking for cheap GPU compute to run these algorithms,"}, {"start": 308.96, "end": 311.11999999999995, "text": " check out Lambda GPU Cloud."}, {"start": 311.12, "end": 316.08, "text": " I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you"}, {"start": 316.08, "end": 319.24, "text": " that they are offering GPU Cloud services as well."}, {"start": 319.24, "end": 326.24, "text": " The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19."}, {"start": 326.24, "end": 331.36, "text": " Lambda's web-based IDE lets you easily access your instance right in your browser."}, {"start": 331.36, "end": 337.0, "text": " And finally, hold onto your papers because the Lambda GPU Cloud costs less than half of"}, {"start": 337.0, "end": 339.12, "text": " AWS and Azure."}, {"start": 339.12, "end": 344.6, "text": " Make sure to go to lambdalabs.com, slash papers, and sign up for one of their amazing GPU"}, {"start": 344.6, "end": 345.6, "text": " instances today."}, {"start": 345.6, "end": 349.32, "text": " Our thanks to Lambda for helping us make better videos for you."}, {"start": 349.32, "end": 377.84, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=nCpGStnayHk
This Neural Network Learned To Look Around In Real Scenes! (NERF)
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers Their amazing instrumentation is available here: https://app.wandb.ai/sweep/nerf/reports/NeRF-%E2%80%93-Representing-Scenes-as-Neural-Radiance-Fields-for-View-Synthesis--Vmlldzo3ODIzMA 📝 The paper "#NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis" is available here: http://www.matthewtancik.com/nerf 📝 The paper "Gaussian Material Synthesis" is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Christian Ahlin, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. About two years ago, we worked on a neural rendering system which would perform light transport on this scene and guess how it would change if we changed the material properties of this test object. It was able to closely match the output of a real light simulation program, and it was near instantaneous, as it took less than 5 milliseconds instead of the 40 to 60 seconds the light transport algorithm usually requires. This technique went by the name Gaussian material synthesis, and the learned quantities were material properties. But this new paper sets out to learn something more difficult and also more general. We are talking about a 5D neural radiance field representation. So what does this mean exactly? It means that we have three dimensions for location and two for view direction, or, in short, the input is where we are in space and what we are looking at, and the output is the resulting image of this view. So here we take a bunch of this input data, learn it, and synthesize new, previously unseen views of not just the materials in the scene, but the entire scene itself. And here we are talking about not only digital environments but real scenes as well. Now that's quite a value proposition, so let's see if it can live up to this promise. Wow! So good! Love it! But what is it really that we should be looking at? What makes a good output here? The most challenging part is writing an algorithm that is able to reproduce delicate, high-frequency details while having temporal coherence. So what does that mean? Well, in simpler words, we are looking for sharp and smooth image sequences. Perfectly matte objects are easier to learn here because they look the same from all directions, while glossier, more reflective materials are significantly more difficult because they change a great deal as we move our head around, and this highly variant information is typically not present in the learned input images. If you read the paper, you'll see these referred to as non-Lambertian materials. The paper and the video contain a ton of examples of these view-dependent effects to demonstrate that these difficult scenes are handled really well by this technique. Refractions also look great. Now, if we define difficulty as things that change a lot when we change our position or view direction a little, not only are the non-Lambertian materials going to give us headaches, occlusions can be challenging as well. For instance, you can see here how well it handles the complex occlusion situation between the ribs of the skeleton. It also has an understanding of depth, and this depth information is so accurate that we can do these nice augmented reality applications where we put a new virtual object in the scene and it correctly determines whether it is in front of or behind the real objects in the scene. Kind of like what these new iPads do with their lidar sensors, but without the sensor. As you see, this technique smokes the competition. So what do you know? Entire real-world scenes can be reproduced from only a few views by using neural networks, and the results are just out of this world. Absolutely amazing. What you see here is an instrumentation of this exact paper we have talked about, which was made by Weights & Biases. I think organizing these experiments really showcases the usability of their system. Also, Weights & Biases provides tools to track your experiments in your deep learning projects.
Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you are an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com/papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
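To make the "5D input" explanation in this transcript more concrete, here is a minimal PyTorch sketch of a NeRF-style network: a position and a view direction go in, a color and a volume density come out. The layer sizes, the number of encoding frequencies, and the use of a 3D unit vector for the view direction are illustrative choices, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=6):
    """Map raw coordinates to sines and cosines of increasing frequency,
    which helps a small MLP represent fine, high-frequency detail."""
    feats = [x]
    for i in range(num_freqs):
        feats.append(torch.sin((2.0 ** i) * x))
        feats.append(torch.cos((2.0 ** i) * x))
    return torch.cat(feats, dim=-1)

class TinyNeRF(nn.Module):
    """Maps a 3D position and a 3D view direction to an RGB color and a density."""
    def __init__(self, num_freqs=6, hidden=128):
        super().__init__()
        in_dim = 2 * 3 * (1 + 2 * num_freqs)  # encoded position + encoded view direction
        self.num_freqs = num_freqs
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 3 color channels + 1 volume density
        )

    def forward(self, position, view_dir):
        x = torch.cat([positional_encoding(position, self.num_freqs),
                       positional_encoding(view_dir, self.num_freqs)], dim=-1)
        out = self.mlp(x)
        rgb = torch.sigmoid(out[..., :3])   # colors in [0, 1]
        density = torch.relu(out[..., 3:])  # non-negative opacity
        return rgb, density

# Query the field at one sample point along a camera ray (batch of 1).
model = TinyNeRF()
rgb, density = model(torch.rand(1, 3), torch.rand(1, 3))
```

To render an actual pixel, such color and density predictions are sampled along a camera ray and composited with classical volume rendering, which is what lets the same network answer queries from previously unseen viewpoints.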
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 5.0, "end": 10.44, "text": " About two years ago, we worked on a neuro-rendering system which would perform light transport"}, {"start": 10.44, "end": 15.56, "text": " on this scene and guess how it would change if we would change the material properties"}, {"start": 15.56, "end": 17.2, "text": " of this test object."}, {"start": 17.2, "end": 22.28, "text": " It was able to closely match the output of a real light simulation program and it was"}, {"start": 22.28, "end": 28.72, "text": " near instantaneous as it took less than 5 milliseconds instead of the 40 to 60 seconds"}, {"start": 28.72, "end": 32.0, "text": " the light transport algorithm usually requires."}, {"start": 32.0, "end": 37.28, "text": " This technique went by the name Gaussian material synthesis and the learned quantities were"}, {"start": 37.28, "end": 39.72, "text": " material properties."}, {"start": 39.72, "end": 45.8, "text": " But this new paper sets out to learn something more difficult and also more general."}, {"start": 45.8, "end": 50.44, "text": " We are talking about a 5D Neural Radiance Field Representation."}, {"start": 50.44, "end": 53.0, "text": " So what does this mean exactly?"}, {"start": 53.0, "end": 58.4, "text": " What this means is that we have the three dimensions for location and two for view direction"}, {"start": 58.4, "end": 64.44, "text": " or in short the input is where we are in space and what we are looking at and the resulting"}, {"start": 64.44, "end": 66.28, "text": " image of this view."}, {"start": 66.28, "end": 73.03999999999999, "text": " So here we take a bunch of this input data, learn it and synthesize new previously unseen"}, {"start": 73.03999999999999, "end": 79.16, "text": " views of not just the materials in the scene but the entire scene itself."}, {"start": 79.16, "end": 84.92, "text": " And here we are talking not only digital environments but also real scenes as well."}, {"start": 84.92, "end": 90.2, "text": " Now that's quite a value proposition so let's see if it can live up to this promise."}, {"start": 90.2, "end": 91.76, "text": " Wow!"}, {"start": 91.76, "end": 92.76, "text": " So good!"}, {"start": 92.76, "end": 94.08, "text": " Love it!"}, {"start": 94.08, "end": 96.92, "text": " But what is it really that we should be looking at?"}, {"start": 96.92, "end": 98.96000000000001, "text": " What makes a good output here?"}, {"start": 98.96000000000001, "end": 104.56, "text": " The most challenging part is writing an algorithm that is able to reproduce delicate high frequency"}, {"start": 104.56, "end": 107.8, "text": " details while having temporal coherence."}, {"start": 107.8, "end": 109.6, "text": " So what does that mean?"}, {"start": 109.6, "end": 114.72, "text": " Well, in simpler words we are looking for sharp and smooth image sequences."}, {"start": 114.72, "end": 120.0, "text": " Perfectly matte objects are easier to learn here because they look the same from all directions"}, {"start": 120.0, "end": 125.28, "text": " while glossier, more reflective materials are significantly more difficult because they"}, {"start": 125.28, "end": 131.6, "text": " change a great deal as we move our head around and this highly variant information is typically"}, {"start": 131.6, "end": 134.24, "text": " not present in the learned input images."}, {"start": 134.24, "end": 138.84, "text": " If you read the paper you'll see these referred to as 
non-lumbarian materials."}, {"start": 138.84, "end": 143.28, "text": " The paper and the video contains a ton of examples of these view dependent effects"}, {"start": 143.28, "end": 148.76, "text": " to demonstrate that these difficult scenes are handled really well by this technique."}, {"start": 148.76, "end": 151.08, "text": " Refractions also look great."}, {"start": 151.08, "end": 156.64, "text": " Now if we define difficulty as things that change a lot when we change our position or"}, {"start": 156.64, "end": 162.92000000000002, "text": " view direction a little, not only the non-lumbarian materials are going to give us headaches, occlusions"}, {"start": 162.92000000000002, "end": 165.12, "text": " can also be challenging as well."}, {"start": 165.12, "end": 169.84, "text": " For instance you can see here how well it handles the complex occlusion situation between"}, {"start": 169.84, "end": 175.8, "text": " the ribs of the skeleton here."}, {"start": 175.8, "end": 181.08, "text": " It also has an understanding of depth and this depth information is so accurate that we"}, {"start": 181.08, "end": 187.48000000000002, "text": " can do these nice augmented reality applications where we put a new virtual object in the scene"}, {"start": 187.48000000000002, "end": 193.24, "text": " and it correctly determines whether it is in front of or behind the real objects in the"}, {"start": 193.24, "end": 194.56, "text": " scene."}, {"start": 194.56, "end": 201.8, "text": " End of what these new iPads do with their lidar sensors but without the sensor."}, {"start": 201.8, "end": 205.24, "text": " As you see this technique smokes the competition."}, {"start": 205.24, "end": 206.56, "text": " So what do you know?"}, {"start": 206.56, "end": 212.04, "text": " Entire real world scenes can be reproduced from only a few views by using neural networks"}, {"start": 212.04, "end": 215.16, "text": " and the results are just out of this world."}, {"start": 215.16, "end": 216.8, "text": " Absolutely amazing."}, {"start": 216.8, "end": 221.28, "text": " What you see here is an instrumentation of this exact paper we have talked about which"}, {"start": 221.28, "end": 223.56, "text": " was made by weights and biases."}, {"start": 223.56, "end": 228.96, "text": " I think organizing these experiments really showcases the usability of their system."}, {"start": 228.96, "end": 233.4, "text": " Also weights and biases provides tools to track your experiments in your deep learning"}, {"start": 233.4, "end": 234.4, "text": " projects."}, {"start": 234.4, "end": 239.16, "text": " Their system is designed to save you a ton of time and money and it is actively used"}, {"start": 239.16, "end": 245.72, "text": " in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more."}, {"start": 245.72, "end": 251.2, "text": " And the best part is that if you are an academic or have an open source project you can use"}, {"start": 251.2, "end": 252.96, "text": " their tools for free."}, {"start": 252.96, "end": 255.52, "text": " What really is as good as it gets."}, {"start": 255.52, "end": 260.88, "text": " Make sure to visit them through wnbe.com slash papers or just click the link in the video"}, {"start": 260.88, "end": 264.12, "text": " description and you can get a free demo today."}, {"start": 264.12, "end": 268.76, "text": " Our thanks to weights and biases for their long standing support and for helping us make"}, {"start": 268.76, "end": 270.08, "text": " better videos for you."}, {"start": 270.08, "end": 
291.79999999999995, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=VQgYPv8tb6A
This AI Makes "Audio Deepfakes"!
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers Their blog post on #deepfakes is available here: https://www.wandb.com/articles/improving-deepfake-performance-with-data 📝 The paper "Neural Voice Puppetry: Audio-driven Facial Reenactment" and its online demo are available here: Paper: https://justusthies.github.io/posts/neural-voice-puppetry/ Demo - **Update: seems to have been disabled in the meantime, apologies!** : http://kaldir.vc.in.tum.de:9000/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #audiodeepfake #voicedeepfake #deepfake
Your fellow strollers, this is too many papers with this man's name that isn't possible to pronounce. My name is Dr. Károly Zsolnai-Fehér, and indeed it seems that pronouncing my name requires some advanced technology. So, what was this? I promise to tell you in a moment, but to understand what happened here, first let's have a look at this deepfake technique we showcased a few videos ago. As you see, we are at a point where our mouth, head, and eye movements are also realistically translated to a chosen target subject, and perhaps the most remarkable part of this work was that we don't even need a video of this target person, just one photograph. However, these deepfake techniques mainly help us in transferring video content. So, what about voice synthesis? Is it also as advanced as this technique we are looking at? Well, let's have a look at an example and you can decide for yourself. This is a recent work that goes by the name Tacotron 2, and it performs AI-based voice cloning. All this technique requires is a 5-second sound sample of us, and it is able to synthesize new sentences in our voice as if we had uttered these words ourselves. Let's listen to a couple of examples. The Norsemen considered the rainbow as a bridge over which the gods passed from Earth to their home in the sky. Take a look at these pages for Cricut Creek Drive. There are several listings for gas station. Here's the forecast for the next four days. Wow, these are truly incredible. The timbre of the voice is very similar, and it is able to synthesize sounds and consonants that have to be inferred because they were not heard in the original voice sample. And now let's jump to the next level and use a new technique that takes a sound sample and animates the video footage as if the target subject had said it themselves. This technique is called Neural Voice Puppetry, and even though the voices here are synthesized by the Tacotron 2 method that you heard a moment ago, we shouldn't judge this technique by its audio quality, but by how well the video follows these given sounds. Let's go. The President of the United States is the head of state and head of government of the United States, indirectly elected to a four-year term by the people through the Electoral College. The office holder leads the executive branch of the federal government and is the commander in chief of the United States Armed Forces. There are currently four living former presidents. If you decide to stay until the end of this video, there will be another fun video sample waiting for you there. Now note that this is not the first technique to achieve results like this, so I can't wait to look under the hood and see what's new here. After processing the incoming audio, the gestures are applied to an intermediate 3D model, which is specific to each person, since each speaker has their own way of expressing themselves. You can see this intermediate 3D model here, but we are not done yet: we feed it through a neural renderer, and what this does is apply this motion to the particular face shown in the video. You can imagine the intermediate 3D model as a crude mask that models the gestures well but does not look like the face of anyone, and the neural renderer adapts this mask to our target subject. This includes adapting it to the current resolution, lighting, face position, and more, all of which is specific to what is seen in the video. What is even cooler is that this neural rendering part runs in real time. So what do we get from all this?
Well, for one, superior quality, but at the same time, it also generalizes to multiple targets. Have a look here. You know, I think we're in a moment of history where probably the most important thing we need to do is to bring the country together, and one of the skills that I bring to bear. And the list of great news is not over yet: you can try it yourself. The link is available in the video description. Make sure to leave a comment with your results. To sum up: by combining multiple existing techniques, we can now perform joint video and audio synthesis for a target subject, and it is important that everyone knows that such tools exist. This episode has been supported by Weights & Biases. Here they show you how to use their tool to perform face swapping and improve the model that performs it. Also, Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you're an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com/papers or just click the link in the video description, and you can get a free demo today. Thanks to Weights & Biases for their long-standing support and for helping us make better videos for you.
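The two-stage pipeline described above (audio drives an intermediate, person-specific 3D face model, and a neural renderer then adapts that crude "mask" to the target video) can be summarized in a short structural sketch. Every module below is a hypothetical placeholder with made-up sizes; the real system's audio feature extractor, 3D face model, and neural renderer are far more elaborate.

```python
import torch
import torch.nn as nn

class AudioToExpression(nn.Module):
    """Maps a window of audio features to expression coefficients
    for an intermediate 3D face model (person-specific in the paper)."""
    def __init__(self, audio_dim=29, expr_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(audio_dim, 128), nn.ReLU(),
            nn.Linear(128, expr_dim),
        )

    def forward(self, audio_features):
        return self.net(audio_features)

class NeuralRenderer(nn.Module):
    """Turns a rasterized, crude 3D-model rendering into a frame that matches
    the target video's lighting, resolution, and face position."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, crude_render):
        return self.net(crude_render)

def puppet_frame(audio_features, face_model, audio_net, renderer):
    """One frame of audio-driven reenactment (illustrative only).
    `face_model` is any callable that turns expression coefficients
    into a crude rendered image of the intermediate 3D model."""
    expression = audio_net(audio_features)   # stage 1: audio -> gestures
    crude_render = face_model(expression)    # intermediate 3D 'mask'
    return renderer(crude_render)            # stage 2: adapt to the target video
```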
[{"start": 0.0, "end": 12.5, "text": " Your fellow strollers, this is too many papers with this man's name that isn't possible"}, {"start": 12.5, "end": 15.0, "text": " to pronounce."}, {"start": 15.0, "end": 20.28, "text": " My name is Dr. Karo Ejona Ifehir and indeed it seems that pronouncing my name requires"}, {"start": 20.28, "end": 21.88, "text": " some advanced technology."}, {"start": 21.88, "end": 24.16, "text": " So, what was this?"}, {"start": 24.16, "end": 29.400000000000002, "text": " I promise to tell you in a moment, but to understand what happened here, first let's have a look"}, {"start": 29.4, "end": 32.96, "text": " at this deepfake technique we showcased a few videos ago."}, {"start": 32.96, "end": 38.68, "text": " As you see, we are at a point where our mouth, head and eye movements are also realistically"}, {"start": 38.68, "end": 44.26, "text": " translated to a chosen target subject and perhaps the most remarkable part of this work"}, {"start": 44.26, "end": 49.92, "text": " was that we don't even need a video of this target person, just one photograph."}, {"start": 49.92, "end": 55.8, "text": " However, these deepfake techniques mainly help us in transferring video content."}, {"start": 55.8, "end": 58.72, "text": " So, what about voice synthesis?"}, {"start": 58.72, "end": 62.04, "text": " Is it also as advanced as this technique we are looking at?"}, {"start": 62.04, "end": 66.76, "text": " Well, let's have a look at an example and you can decide for yourself."}, {"start": 66.76, "end": 72.28, "text": " This is a recent work that goes by the name Tecotron 2 and it performs AI-based voice"}, {"start": 72.28, "end": 73.52, "text": " cloning."}, {"start": 73.52, "end": 78.44, "text": " All this technique requires is a 5-second sound sample of us and is able to synthesize"}, {"start": 78.44, "end": 83.84, "text": " new sentences in our voice as if we utter these words ourselves."}, {"start": 83.84, "end": 86.0, "text": " Let's listen to a couple examples."}, {"start": 86.0, "end": 89.84, "text": " The Norsemen considered the rainbow as a bridge over which the gods passed from Earth to"}, {"start": 89.84, "end": 92.28, "text": " their home in the sky."}, {"start": 92.28, "end": 96.12, "text": " Take a look at these pages for Cricut Creek Drive."}, {"start": 96.12, "end": 100.08, "text": " There are several listings for gas station."}, {"start": 100.08, "end": 103.08, "text": " Here's the forecast for the next four days."}, {"start": 103.08, "end": 106.76, "text": " Wow, these are truly incredible."}, {"start": 106.76, "end": 112.56, "text": " The tumble of the voice is very similar and it is able to synthesize sounds and consonants"}, {"start": 112.56, "end": 117.44, "text": " that have to be inferred because they were not heard in the original voice sample."}, {"start": 117.44, "end": 123.32000000000001, "text": " And now let's jump to the next level and use a new technique that takes a sound sample"}, {"start": 123.32000000000001, "end": 128.48, "text": " and animates the video footage as if the target subject said it themselves."}, {"start": 128.48, "end": 134.12, "text": " This technique is called Neural Voice Papetry and even though the voices here are synthesized"}, {"start": 134.12, "end": 139.0, "text": " by this previous Tecotron 2 method that you heard a moment ago, we shouldn't judge this"}, {"start": 139.0, "end": 145.36, "text": " technique by its audio quality, but how well the video follows these given sounds."}, {"start": 145.36, "end": 146.52, "text": 
" Let's go."}, {"start": 146.52, "end": 150.4, "text": " The President of the United States is the head of state and head of government of the United"}, {"start": 150.4, "end": 155.56, "text": " States, indirectly elected to a four-year term by the people through the Electoral College."}, {"start": 155.56, "end": 159.8, "text": " The office holder leads the executive branch of the federal government and is the commander"}, {"start": 159.8, "end": 162.68, "text": " in chief of the United States Armed Forces."}, {"start": 162.68, "end": 166.28, "text": " There are currently four living former presidents."}, {"start": 166.28, "end": 170.92, "text": " If you decide to stay until the end of this video, there will be another fun video sample"}, {"start": 170.92, "end": 172.6, "text": " waiting for you there."}, {"start": 172.6, "end": 177.44, "text": " Now note that this is not the first technique to achieve results like this, so I can't"}, {"start": 177.44, "end": 181.36, "text": " wait to look under the hood and see what's new here."}, {"start": 181.36, "end": 186.96, "text": " After processing the incoming audio, the gestures are applied to an intermediate 3D model"}, {"start": 186.96, "end": 193.28, "text": " which is specific to each person since each speaker has their own way of expressing themselves."}, {"start": 193.28, "end": 198.96, "text": " You can see this intermediate 3D model here, but we are not done yet, we feed it through"}, {"start": 198.96, "end": 204.8, "text": " a neural renderer and what this does is apply this motion to the particular face model"}, {"start": 204.8, "end": 206.36, "text": " shown in the video."}, {"start": 206.36, "end": 212.6, "text": " You can imagine the intermediate 3D model as a crude mask that models the gestures well,"}, {"start": 212.6, "end": 218.24, "text": " but does not look like the face of anyone where the neural render adapts the mask to our"}, {"start": 218.24, "end": 219.88, "text": " power-gates subject."}, {"start": 219.88, "end": 226.35999999999999, "text": " This includes adapting it to the current resolution, lighting, face position and more, all of"}, {"start": 226.35999999999999, "end": 230.0, "text": " which is specific to what is seen in the video."}, {"start": 230.0, "end": 235.72, "text": " What is even cooler is that this neural rendering part runs in real time."}, {"start": 235.72, "end": 237.92, "text": " So what do we get from all this?"}, {"start": 237.92, "end": 245.6, "text": " Well, one superior quality, but at the same time it also generalizes to multiple targets."}, {"start": 245.6, "end": 246.6, "text": " Have a look here."}, {"start": 246.6, "end": 252.12, "text": " You know, I think we're in a moment of history where probably the most important thing we"}, {"start": 252.12, "end": 259.08, "text": " need to do is to bring the country together and one of the skills that I bring to bear."}, {"start": 259.08, "end": 262.68, "text": " And the list of great news is not over yet, you can try it yourself."}, {"start": 262.68, "end": 265.48, "text": " The link is available in the video description."}, {"start": 265.48, "end": 267.71999999999997, "text": " Make sure to leave a comment with your results."}, {"start": 267.71999999999997, "end": 272.88, "text": " To sum up by combining multiple existing techniques, it is important that everyone knows"}, {"start": 272.88, "end": 278.36, "text": " about the fact that we can both perform joint video and audio synthesis for a target"}, {"start": 278.36, "end": 279.6, "text": " 
subject."}, {"start": 279.6, "end": 282.76, "text": " This episode has been supported by weights and biases."}, {"start": 282.76, "end": 288.56, "text": " Here they show you how to use their tool to perform face swapping and improve your model"}, {"start": 288.56, "end": 289.92, "text": " that performs it."}, {"start": 289.92, "end": 294.44, "text": " Also, weights and biases provides tools to track your experiments in your deep learning"}, {"start": 294.44, "end": 295.44, "text": " projects."}, {"start": 295.44, "end": 300.0, "text": " Their system is designed to save you a ton of time and money and it is actively used"}, {"start": 300.0, "end": 306.64, "text": " in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more."}, {"start": 306.64, "end": 312.12, "text": " And the best part is that if you're an academic or have an open source project, you can use"}, {"start": 312.12, "end": 313.8, "text": " their tools for free."}, {"start": 313.8, "end": 316.44, "text": " It really is as good as it gets."}, {"start": 316.44, "end": 321.84, "text": " Make sure to visit them through wnb.com slash papers or just click the link in the video"}, {"start": 321.84, "end": 325.32, "text": " description and you can get a free demo today."}, {"start": 325.32, "end": 330.15999999999997, "text": " Thanks to weights and biases for their long standing support and for helping us make"}, {"start": 330.16, "end": 360.12, "text": " better videos for you."}]
Two Minute Papers
https://www.youtube.com/watch?v=higGxGmwDbs
Muscle Simulation...Now In Real Time! 💪
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "VIPER: Volume Invariant Position-based Elastic Rods" is available here: https://arxiv.org/abs/1906.05260 https://github.com/vcg-uvic/viper ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join  🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. We showcased a paper just a few months ago which was about creating virtual characters with a skeletal system, adding more than 300 muscles, and teaching them to use these muscles to kick, jump, move around, and perform other realistic human movements. It came with really cool insights, as it could portray how increasing the amount of weight to be lifted changes which muscles are being trained during a workout. These agents also learned to jump really high, and you can see a drastic difference between the movement required for a mediocre jump and an amazing one. Beyond that, it showed us how these virtual characters would move if they were hamstrung by bone deformities, a stiff ankle, or muscle deficiencies, and we watched them learn to walk despite these setbacks. We could even have a look at the improvements after a virtual surgery takes place. So now, how about an even more elaborate technique that focuses more on the muscle simulation part? The ropes here are simulated in a way where the only interesting property of the particles holding them together is position. Cosserat rod simulations are an improvement because they also take into consideration the orientation of the particles and hence can simulate twists as well. And this new technique, called VIPER, adds a scale property to these particles and hence takes into consideration stretching and compression. What does that mean? Well, it means that this can be used for a lot of muscle-related simulation problems that you will see in a moment. However, before that, an important part is inserting these objects into our simulations. The cool thing is that we don't need to get an artist to break up these surfaces into muscle fibers. That would not only be too laborious, but of course would also require a great deal of anatomical knowledge. Instead, this technique does all of this automatically, a process that the authors call viperization. So, in goes the geometry, and out comes a nice muscle model. This really opens up a world of really cool applications. For instance, one such application is muscle movement simulation. When attaching the muscles to bones, as we move the character, the muscles move and contract accurately. Two, it can also perform muscle growth simulations. And three, we get more accurate soft-body physics. Or, in other words, we can animate gooey characters like this octopus. Okay, that all sounds great, but how expensive is this? Do we have to wait a few seconds to minutes to get this? No, no, not at all. This technique is really efficient and runs in milliseconds, so we can throw in a couple more objects. And by a couple, a computer graphics researcher always means a couple dozen more, of course. And in the meantime, let's look carefully at the simulation timings. It starts from around 8 to 9 milliseconds per frame, and with all these octopi, we are still hovering around 10 milliseconds per frame. That's 100 frames per second, which means that the algorithm scales really well with the complexity of these scenes. This is one of those rare papers that is both written very precisely and absolutely beautiful. Make sure to have a look in the video description. The source code of the project is also available. And with this, I hope, we will get even more realistic characters with real muscle models in our computer games and real-time applications. What a time to be alive. This episode has been supported by Lambda.
If you're a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos, and I'm happy to tell you that they are offering GPU Cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold on to your papers, because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to lambdalabs.com/papers and sign up for one of their amazing GPU instances today. Our thanks to Lambda for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
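As a rough illustration of the particle state described in this transcript: plain rope particles carry only a position, Cosserat rods add an orientation, and VIPER adds a scale that lets a segment trade thickness for length so that volume is roughly preserved under stretching and compression. The snippet below is a toy sketch of that idea, not the paper's actual solver; the data layout and the constraint are simplified stand-ins.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class ViperParticle:
    position: np.ndarray                        # 3D location (plain rope particles stop here)
    orientation: np.ndarray = field(            # unit quaternion (Cosserat rods add this)
        default_factory=lambda: np.array([1.0, 0.0, 0.0, 0.0]))
    scale: float = 1.0                          # VIPER's extra per-particle degree of freedom

def volume_preserving_scale(rest_length, current_length):
    """Toy volume constraint: if a rod segment of radius r*s and length L must keep
    the volume pi * (r*s)^2 * L constant, stretching it forces the scale s to shrink."""
    return np.sqrt(rest_length / current_length)

# Example: a segment stretched to 1.25x its rest length thins to about 0.89x its radius.
print(volume_preserving_scale(1.0, 1.25))
```

In the actual method, which the title describes as volume-invariant position-based elastic rods, this scale lives on every particle and is solved together with positions and orientations in a position-based dynamics style solver.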
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Kato Jonaifahir."}, {"start": 4.8, "end": 9.8, "text": " We have showcased this paper just a few months ago, which was about creating virtual characters"}, {"start": 9.8, "end": 15.4, "text": " with a skeletal system, adding more than 300 muscles and teaching them to use these muscles"}, {"start": 15.4, "end": 21.52, "text": " to kick, jump, move around and perform other realistic human movements."}, {"start": 21.52, "end": 26.6, "text": " It came with really cool insights as it could portray how increasing the amount of weight"}, {"start": 26.6, "end": 31.560000000000002, "text": " to be lifted changes what muscles are being trained during a workout."}, {"start": 31.560000000000002, "end": 36.2, "text": " These agents also learn to jump really high and you can see a drastic difference between"}, {"start": 36.2, "end": 41.400000000000006, "text": " the movement required for a mediocre jump and an amazing one."}, {"start": 41.400000000000006, "end": 46.24, "text": " Beyond that, it showed us how these virtual characters would move if they were hamstrung"}, {"start": 46.24, "end": 52.68000000000001, "text": " by bone deformities, a stiff ankle or muscle deficiencies and watched them learn to walk"}, {"start": 52.68000000000001, "end": 54.44, "text": " despite these setbacks."}, {"start": 54.44, "end": 59.48, "text": " We could even have a look at the improvements after a virtual surgery takes place."}, {"start": 59.48, "end": 65.12, "text": " So now, how about an even more elaborate technique that focuses more on the muscle simulation"}, {"start": 65.12, "end": 66.32, "text": " part?"}, {"start": 66.32, "end": 71.03999999999999, "text": " The ropes here are simulated in a way that the only interesting property of the particles"}, {"start": 71.03999999999999, "end": 74.44, "text": " holding them together is position."}, {"start": 74.44, "end": 78.84, "text": " Cossarat rod simulations are an improvement because they also take into consideration"}, {"start": 78.84, "end": 84.4, "text": " the orientation of the particles and hands can simulate twists as well."}, {"start": 84.4, "end": 89.96000000000001, "text": " And this new technique is called Viper and adds a scale property to these particles and"}, {"start": 89.96000000000001, "end": 94.72, "text": " hands takes into consideration stretching and compression."}, {"start": 94.72, "end": 95.72, "text": " What does that mean?"}, {"start": 95.72, "end": 100.52000000000001, "text": " Well, it means that this can be used for a lot of muscle related simulation problems that"}, {"start": 100.52000000000001, "end": 102.28, "text": " you will see in a moment."}, {"start": 102.28, "end": 108.4, "text": " However, before that, an important part is inserting these objects into our simulations."}, {"start": 108.4, "end": 112.76, "text": " The cool thing is that we don't need to get an artist to break up these surfaces into"}, {"start": 112.76, "end": 114.36000000000001, "text": " muscle fibers."}, {"start": 114.36, "end": 119.44, "text": " And that would not only be too laborious, but of course would also require a great deal"}, {"start": 119.44, "end": 121.2, "text": " of anatomical knowledge."}, {"start": 121.2, "end": 128.04, "text": " Instead, this technique does all this automatically a process that the authors call Viparization."}, {"start": 128.04, "end": 133.8, "text": " So, in goes the geometry and outcomes a nice muscle model."}, {"start": 133.8, 
"end": 137.68, "text": " This really opens up a world of really cool applications."}, {"start": 137.68, "end": 141.68, "text": " For instance, one such application is muscle movement simulation."}, {"start": 141.68, "end": 147.56, "text": " When attaching the muscles to bones as we move the character, the muscles move and contract"}, {"start": 147.56, "end": 149.56, "text": " accurately."}, {"start": 149.56, "end": 161.52, "text": " Two, it can also perform muscle growth simulations."}, {"start": 161.52, "end": 165.36, "text": " And three, we get more accurate soft body physics."}, {"start": 165.36, "end": 170.24, "text": " Or in other words, we can animate gooey characters like this octopus."}, {"start": 170.24, "end": 175.48000000000002, "text": " Okay, that all sounds great, but how expensive is this?"}, {"start": 175.48000000000002, "end": 179.16, "text": " Do we have to wait a few seconds to minutes to get this?"}, {"start": 179.16, "end": 181.20000000000002, "text": " No, no, not at all."}, {"start": 181.20000000000002, "end": 186.08, "text": " This technique is really efficient and runs in milliseconds so we can throw in a couple"}, {"start": 186.08, "end": 187.48000000000002, "text": " more objects."}, {"start": 187.48000000000002, "end": 193.56, "text": " And by couple, a computer graphics researcher always means a couple dozen more, of course."}, {"start": 193.56, "end": 198.08, "text": " And in the meantime, let's look carefully at the simulation timings."}, {"start": 198.08, "end": 203.24, "text": " It starts from around 8 to 9 milliseconds per frame and with all these octopi, we are"}, {"start": 203.24, "end": 206.96, "text": " still hovering around 10 milliseconds per frame."}, {"start": 206.96, "end": 211.36, "text": " That's 100 frames per second, which means that the algorithm scales with the complexity"}, {"start": 211.36, "end": 214.12, "text": " of these scenes really well."}, {"start": 214.12, "end": 220.12, "text": " This is one of those rare papers that is written both very precisely and it is absolutely"}, {"start": 220.12, "end": 221.36, "text": " beautiful."}, {"start": 221.36, "end": 223.56, "text": " Make sure to have a look in the video description."}, {"start": 223.56, "end": 226.48000000000002, "text": " The source code of the project is also available."}, {"start": 226.48, "end": 231.79999999999998, "text": " And this, I hope, will get even more realistic characters with real muscle models in our computer"}, {"start": 231.79999999999998, "end": 234.76, "text": " games and real time applications."}, {"start": 234.76, "end": 236.48, "text": " What a time to be alive."}, {"start": 236.48, "end": 238.88, "text": " This episode has been supported by Lambda."}, {"start": 238.88, "end": 243.95999999999998, "text": " If you're a researcher or a startup looking for cheap GPU compute to run these algorithms,"}, {"start": 243.95999999999998, "end": 246.51999999999998, "text": " check out Lambda GPU Cloud."}, {"start": 246.51999999999998, "end": 251.2, "text": " I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you"}, {"start": 251.2, "end": 254.39999999999998, "text": " that they are offering GPU Cloud services as well."}, {"start": 254.4, "end": 261.56, "text": " The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19."}, {"start": 261.56, "end": 266.6, "text": " Lambda's web-based IDE lets you easily access your instance right in your browser."}, {"start": 266.6, "end": 272.04, "text": " And finally, hold on to your 
papers because the Lambda GPU Cloud costs less than half of"}, {"start": 272.04, "end": 274.16, "text": " AWS and Azure."}, {"start": 274.16, "end": 279.56, "text": " Make sure to go to lambdaleps.com slash papers and sign up for one of their amazing GPU"}, {"start": 279.56, "end": 280.56, "text": " instances today."}, {"start": 280.56, "end": 284.56, "text": " And thanks to Lambda for helping us make better videos for you."}, {"start": 284.56, "end": 313.56, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=-O7ZJ-AJGRE
Is Visualizing Light Waves Possible? ☀️
❤️ Check out Weights & Biases here and sign up for a free demo here: https://www.wandb.com/papers Their blog post is available here: https://www.wandb.com/articles/intro-to-keras-with-weights-biases 📝 The paper "Progressive Transient Photon Beams" is available here: http://webdiis.unizar.es/~juliom/pubs/2019CGF-PTPB/ 📝 The paper "Femto-Photography: Capturing and Visualizing the Propagation of Light" is available here: http://giga.cps.unizar.es/~ajarabo/pubs/femtoSIG2013/ My light transport course is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/rendering-course/ The paper with the image of the shown caustics is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/adaptive_metropolis/ Erratum: people see a "slightly" younger, not older version of you. Apologies! 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Have you heard the saying that whenever we look into the mirror, strictly speaking, we don't really see ourselves, but we see ourselves from the past? From a few nanoseconds ago. Is that true? If so, why? This is indeed true, and the reason for this is that the speed of light is finite, and it has to travel to the mirror and back to our eyes. If you feel that this is really hard to imagine, you are in luck, because a legendary paper from 2013 by the name Femto-Photography captured this effect. I would say it is safe to start holding onto your papers from this point basically until the end of this video. Here you can see a super high-speed camera capturing how a wave of light propagates through a bottle; most of it makes it through, and some gets absorbed by the bottle cap. But this means that the mirror example we talked about need not only be a thought experiment, we can even witness it ourselves. Yep, toy first, mirror image second. Approximately a nanosecond apart. So if someone says that you look old, you have an excellent excuse now. The first author of this work was Andreas Velten, who worked on this at MIT, and he is now a professor leading an incredible research group at the University of Wisconsin-Madison. But wait, since it is possible to create light transport simulations in which we simulate the path of many, many millions of light rays to create a beautiful photorealistic image, Adrian Jarabo thought that he would create a simulator that wouldn't just give us the final image, but would show us the propagation of light in a digital, simulated environment. As you see here, with this, we can create even crazier experiments, because we are not limited by real-world light conditions and the limitations of the camera. The beauty of this technique is just unparalleled. He calls this method transient rendering, and this particular work is tailored to excel at rendering caustic patterns. A caustic is a beautiful phenomenon in nature where curved surfaces reflect or refract light, thereby concentrating it into a relatively small area. I hope that you are not surprised when I say that this is the favorite phenomenon of most light transport researchers. Now, about these caustics: we need a super efficient technique to be able to pull this off. For instance, back in 2013, we showcased a fantastic scene made by Vlad Miller that was a nightmare to compute, and it took a community effort and more than a month to accomplish it. Beyond that, the transient renderer uses only very little memory, builds on the photon beams technique we talked about a few videos ago, and always arrives at a correct solution given enough time. Bravo! And we can do all this through the power of science. Isn't it incredible? And if you feel a little stranded at home and are yearning to learn more about light transport, I held a master-level course on light transport simulations at the Technical University of Vienna. Since I was always teaching it to a handful of motivated students, I thought that the teachings shouldn't only be available to the privileged few who can afford a college education, but should be available for everyone. So, the course is now available free of charge for everyone, no strings attached, so make sure to click the link in the video description to get started. We write a full light simulation program from scratch there and learn about physics, the world around us, and more.
This episode has been supported by Weights & Biases. In this post, they show you how to build and track a simple neural network in Keras to recognize characters from the Simpsons series. You can even fork this piece of code and start right away. Also, Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you're an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com/papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
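As a quick sanity check of the mirror claim at the start of this transcript, a couple of lines of arithmetic show where those "few nanoseconds" come from; the distances below are just example values.

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def mirror_delay_ns(distance_to_mirror_m):
    """Round-trip travel time of light from you to the mirror and back, in nanoseconds."""
    round_trip = 2.0 * distance_to_mirror_m
    return round_trip / SPEED_OF_LIGHT * 1e9

print(mirror_delay_ns(1.0))   # ~6.67 ns for a mirror one meter away
print(mirror_delay_ns(0.15))  # ~1 ns for a mirror held at arm's length
```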
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.4, "end": 7.6000000000000005, "text": " Have you heard the saying that whenever we look into the mirror,"}, {"start": 7.6000000000000005, "end": 10.72, "text": " strictly speaking, we don't really see ourselves,"}, {"start": 10.72, "end": 13.68, "text": " but we see ourselves from the past."}, {"start": 13.68, "end": 15.84, "text": " From a few nanoseconds ago."}, {"start": 15.84, "end": 17.12, "text": " Is that true?"}, {"start": 17.12, "end": 18.88, "text": " If so, why?"}, {"start": 18.88, "end": 23.68, "text": " This is indeed true, and the reason for this is that the speed of light is finite"}, {"start": 23.68, "end": 26.96, "text": " and it has to travel back from the mirror to our eyes."}, {"start": 26.96, "end": 30.64, "text": " If you feel that this is really hard to imagine, you are in luck"}, {"start": 30.64, "end": 35.68, "text": " because a legendary paper from 2013 by the name FEMPT OF AUTOGRAPHY"}, {"start": 35.68, "end": 37.04, "text": " capture this effect."}, {"start": 37.04, "end": 41.28, "text": " I would say it is safe to start holding onto your papers from this point"}, {"start": 41.28, "end": 43.84, "text": " basically until the end of this video."}, {"start": 43.84, "end": 47.44, "text": " Here you can see a super high speed camera capturing"}, {"start": 47.44, "end": 50.72, "text": " how a wave of light propagates through a bottle,"}, {"start": 50.72, "end": 55.36, "text": " most makes it through, and some gets absorbed by the bottle cap."}, {"start": 55.36, "end": 58.4, "text": " But this means that this mirror example we talked about"}, {"start": 58.4, "end": 60.72, "text": " shall not only be a thought experiment,"}, {"start": 60.72, "end": 63.6, "text": " but we can even witness it ourselves."}, {"start": 63.6, "end": 67.12, "text": " Yep, toy first, mirror image second."}, {"start": 67.12, "end": 69.84, "text": " Approximately a nanosecond apart."}, {"start": 69.84, "end": 72.24, "text": " So if someone says that you look old,"}, {"start": 72.24, "end": 74.64, "text": " you have an excellent excuse now."}, {"start": 74.64, "end": 77.36, "text": " The first author of this work was Andra Svelton,"}, {"start": 77.36, "end": 79.28, "text": " who worked on this at MIT,"}, {"start": 79.28, "end": 83.12, "text": " and he is now a professor leading an incredible research group"}, {"start": 83.12, "end": 85.68, "text": " at the University of Wisconsin Medicine."}, {"start": 85.68, "end": 89.36, "text": " But wait, since it is possible to create light transport simulations"}, {"start": 89.36, "end": 93.52000000000001, "text": " in which we simulate the path of many, many millions of light rays"}, {"start": 93.52000000000001, "end": 96.24000000000001, "text": " to create a beautiful photo-realistic image,"}, {"start": 96.24000000000001, "end": 99.92, "text": " Adrienne Harabo thought that he would create a simulator"}, {"start": 99.92, "end": 102.48, "text": " that wouldn't just give us the final image,"}, {"start": 102.48, "end": 105.60000000000001, "text": " but he would show us the propagation of light"}, {"start": 105.60000000000001, "end": 108.32000000000001, "text": " in a digital simulated environment."}, {"start": 108.32000000000001, "end": 112.4, "text": " As you see here, with this, we can create even crazier experiments"}, {"start": 112.4, "end": 115.60000000000001, "text": " because we are not limited to the real world light 
conditions"}, {"start": 115.60000000000001, "end": 119.04, "text": " and limitations of the camera."}, {"start": 119.04, "end": 122.48, "text": " The beauty of this technique is just unparalleled."}, {"start": 122.48, "end": 124.88000000000001, "text": " He calls this method transient rendering,"}, {"start": 124.88000000000001, "end": 127.68, "text": " and this particular work is tailored to excel"}, {"start": 127.68, "end": 130.0, "text": " at rendering caustic patterns."}, {"start": 130.0, "end": 132.88, "text": " A caustic is a beautiful phenomenon in nature"}, {"start": 132.88, "end": 136.24, "text": " where curved surfaces reflect or refract light"}, {"start": 136.24, "end": 140.0, "text": " thereby concentrating it to a relatively small area."}, {"start": 140.0, "end": 144.32, "text": " I hope that you are not surprised when I say that this is the favorite phenomenon"}, {"start": 144.32, "end": 147.04, "text": " of most light transport researchers."}, {"start": 147.04, "end": 149.2, "text": " Now, we're about these caustics."}, {"start": 149.2, "end": 152.72, "text": " We need a super efficient technique to be able to pull this off."}, {"start": 152.72, "end": 157.2, "text": " For instance, back in 2013, we showcased a fantastic scene"}, {"start": 157.2, "end": 160.72, "text": " made by Vlad Miller that was a nightmare to compute"}, {"start": 160.72, "end": 165.36, "text": " and it took a community effort and more than a month to accomplish it."}, {"start": 165.36, "end": 169.6, "text": " Beyond that, the transient renderer only uses very little memory"}, {"start": 169.6, "end": 173.51999999999998, "text": " builds on the Fordon Beams technique we talked about a few videos ago"}, {"start": 173.51999999999998, "end": 178.16, "text": " and always arrives to a correct solution given enough time."}, {"start": 178.16, "end": 179.35999999999999, "text": " Bravo!"}, {"start": 179.35999999999999, "end": 183.35999999999999, "text": " And we can do all this through the power of science."}, {"start": 183.35999999999999, "end": 185.44, "text": " Isn't it incredible?"}, {"start": 185.44, "end": 187.84, "text": " And if you feel a little stranded at home"}, {"start": 187.84, "end": 190.79999999999998, "text": " and are yearning to learn more about light transport,"}, {"start": 190.79999999999998, "end": 194.32, "text": " I held a master-level course on light transport simulations"}, {"start": 194.32, "end": 196.72, "text": " at the Technical University of Vienna."}, {"start": 196.72, "end": 200.72, "text": " Since I was always teaching it to a handful of motivated students,"}, {"start": 200.72, "end": 203.92, "text": " I thought that the teachings shouldn't only be available"}, {"start": 203.92, "end": 207.36, "text": " for the privileged few who can afford a college education,"}, {"start": 207.36, "end": 210.72, "text": " but the teachings should be available for everyone."}, {"start": 210.72, "end": 214.72, "text": " So, the course is now available free of charge for everyone,"}, {"start": 214.72, "end": 218.32, "text": " no strings attached, so make sure to click the link in the video description"}, {"start": 218.32, "end": 219.44, "text": " to get started."}, {"start": 219.44, "end": 222.72, "text": " We write a full light simulation program from scratch there"}, {"start": 222.72, "end": 226.48, "text": " and learn about physics, the world around us, and more."}, {"start": 226.48, "end": 230.07999999999998, "text": " This episode has been supported by weights and biases."}, {"start": 
230.07999999999998, "end": 233.28, "text": " In this post, they show you how to build and track"}, {"start": 233.28, "end": 236.64, "text": " a simple neural network in Keras to recognize characters"}, {"start": 236.64, "end": 238.16, "text": " from the Simpson series."}, {"start": 238.16, "end": 241.92, "text": " You can even fork this piece of code and start right away."}, {"start": 241.92, "end": 245.51999999999998, "text": " Also, weights and biases provide tools to track your experiments"}, {"start": 245.51999999999998, "end": 247.12, "text": " in your deep learning projects."}, {"start": 247.12, "end": 250.79999999999998, "text": " Their system is designed to save you a ton of time and money"}, {"start": 250.79999999999998, "end": 253.92, "text": " and it is actively used in projects at prestigious labs,"}, {"start": 253.92, "end": 258.32, "text": " such as OpenAI, Toyota Research, GitHub, and more."}, {"start": 258.32, "end": 261.52, "text": " And the best part is that if you're an academic"}, {"start": 261.52, "end": 265.44, "text": " or have an open source project, you can use their tools for free."}, {"start": 265.44, "end": 267.91999999999996, "text": " It really is as good as it gets."}, {"start": 267.91999999999996, "end": 271.76, "text": " Make sure to visit them through wnb.com slash papers"}, {"start": 271.76, "end": 274.47999999999996, "text": " or just click the link in the video description"}, {"start": 274.47999999999996, "end": 276.8, "text": " and you can get a free demo today."}, {"start": 276.8, "end": 280.32, "text": " Our thanks to weights and biases for their long-standing support"}, {"start": 280.32, "end": 283.28, "text": " and for helping us make better videos for you."}, {"start": 283.28, "end": 285.52, "text": " Thanks for watching and for your generous support"}, {"start": 285.52, "end": 315.44, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=mUfJOQKdtAk
Everybody Can Make Deepfakes Now!
❤️ Check out Weights & Biases here and sign up for a free demo here: https://www.wandb.com/papers Their blog post is available here: https://www.wandb.com/articles/hyperparameter-tuning-as-easy-as-1-2-3 📝 The paper "First Order Motion Model for Image Animation" and its source code are available here: - Paper: https://aliaksandrsiarohin.github.io/first-order-model-website/ - Colab notebook: https://colab.research.google.com/github/AliaksandrSiarohin/first-order-model/blob/master/demo.ipynb Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join  🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #DeepFake
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. It is important for you to know that everybody can make deepfakes now. You can turn your head around, mouth movements are looking great, and eye movements are also translated into the target footage. So of course, as we always say, two more papers down the line and it will be even better and cheaper than this. As you see, some papers are so well done and are so clear that they just speak for themselves. This is one of them. To use this technique, all you need to do is record a video of yourself, add just one image of the target subject, run this learning-based algorithm, and there you go. If you stay until the end of this video, you will see even more people introducing themselves as me. As noted, many important gestures are being translated, such as head, mouth, and eye movement, but what's even better is that even full body movement works. Absolutely incredible. Now there are plenty of techniques out there that can create deepfakes, many of which we have talked about in this series, so what sets this one apart? Well, one, most previous algorithms required additional information, for instance, facial landmarks or a pose estimation of the target subject. This one requires no such knowledge of the image. As a result, this technique becomes so much more general. We can create high-quality deepfakes with just one photo of the target subject, make ourselves dance like a professional, and what's more, hold on to your papers, because it also works on non-humanoid and cartoon models, and even that's not all, we can even synthesize an animation of a robot arm by using another one as a driving sequence. So why is it that it doesn't need all this additional information? Well, if we look under the hood, we see that it is a neural network-based method that generates all this information by itself. It identifies what kind of movements and transformations are taking place in our driving video. You can see that the learned keypoints here follow the motion of the videos really well. Now, we pack up all this information and send it over to the generator to warp the target image appropriately, taking into consideration possible occlusions that may occur. This means that some parts of the image may now be uncovered where we don't know what the background should look like. Normally, we would do this with an image-inpainting technique, for instance, you see the legendary PatchMatch algorithm here that does it, however, in this case, the neural network does it automatically by itself. If you are looking for flaws in the output, these will be important regions to look at. And it not only requires less information than previous techniques, but it also outperforms them significantly. Yes, there is still room to improve this. For instance, the sudden head rotation here seems to generate an excessive amount of visual artifacts. The source code and even an example Colab notebook are available, and I think it is one of the most accessible papers in this area, so make sure to have a look in the video description and try to run your own experiments. Let me know in the comments how they went or feel free to drop by at our Discord server where all of you Fellow Scholars are welcome to discuss ideas and learn together in a kind and respectful environment. The link is available in the video description, it is completely free, and if you have joined, make sure to leave a short introduction.
Now, of course, beyond the many amazing use cases of this in reviving deceased actors, creating beautiful visual art, redubbing movies and more, unfortunately, there are people around the world who are rubbing their palms together in excitement to use this to their advantage. So, you may ask, why make these videos on deepfakes? Why spread this knowledge, especially now with the source code? Well, I think step number one is to make sure to inform the public that these deepfakes can now be created quickly and inexpensively and they don't require a trained scientist anymore. If this can be done, it is of utmost importance that we all know about it. Then, beyond that, step number two, as a service to the public, I attend EU and NATO conferences and inform key political and military decision makers about the existence and details of these techniques to make sure that they also know about these, and using that knowledge, they can make better decisions for us. You see me doing it here. And again, you see this technique in action here to demonstrate that it works really well for video footage in the wild. Note that these talks and consultations all happen free of charge and if they keep inviting me, I'll keep showing up to help with this in the future as a service to the public. The cool thing is that later, over dinner, they tend to come back to me with a summary of their understanding of the situation and I highly appreciate the fact that they are open to what we scientists have to say. And now, please enjoy the promised footage. Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. It is important for you to know that everybody can make deepfakes now. You can turn your head around, mouth movements are looking great, and eye movements are also translated into the target footage. And of course, as we always say, two more papers down the line and it will be even better and cheaper than this. This episode has been supported by Weights and Biases. Here, they show you how you can use sweeps, their tool to search through high-dimensional parameter spaces and find the best-performing model. Weights and Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you are an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights and Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
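The transcription above describes the pipeline only at a conceptual level: learned keypoints track the motion in the driving video, the target image is warped accordingly, and occluded regions are filled in by the network. Purely as a hedged toy illustration of the warping step, and not the paper's learned first-order motion model (no local affine transforms, occlusion maps, or inpainting network here), a dense warp driven by keypoint displacements could be sketched like this:

# Toy sketch: warp an image so that source keypoints move toward driving keypoints.
# This only illustrates keypoint-driven warping, not the learned model from
# "First Order Motion Model for Image Animation".
import numpy as np
from scipy.ndimage import map_coordinates

def toy_keypoint_warp(image, kp_src, kp_drv, sigma=20.0):
    """image: (H, W, C) float array; kp_src, kp_drv: (K, 2) arrays of (row, col)."""
    H, W = image.shape[:2]
    rows, cols = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    grid = np.stack([rows, cols], axis=-1).astype(np.float64)   # output pixel coordinates
    disp = np.zeros_like(grid)
    weights = np.zeros((H, W))
    for s, d in zip(kp_src, kp_drv):
        # Pixels near the driving keypoint follow that keypoint's motion.
        w = np.exp(-np.sum((grid - d) ** 2, axis=-1) / (2.0 * sigma ** 2))
        disp += w[..., None] * (s - d)          # backward flow: where to sample from
        weights += w
    disp /= np.maximum(weights, 1e-8)[..., None]
    sample = grid + disp                        # source location for each output pixel
    warped = np.stack([
        map_coordinates(image[..., c], [sample[..., 0], sample[..., 1]],
                        order=1, mode="nearest")
        for c in range(image.shape[-1])
    ], axis=-1)
    return warped

# Example with random data: a single keypoint moves 10 pixels to the right.
img = np.random.rand(128, 128, 3)
out = toy_keypoint_warp(img, kp_src=np.array([[64.0, 64.0]]),
                        kp_drv=np.array([[64.0, 74.0]]))

In the actual method, both the keypoints and the dense motion field are predicted by neural networks trained on video, which is what makes the animation plausible far away from the keypoints and handles the occluded regions mentioned above.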
[{"start": 0.0, "end": 12.36, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karaj Zonai-Fehir."}, {"start": 12.36, "end": 16.6, "text": " It is important for you to know that everybody can make deepfakes now."}, {"start": 16.6, "end": 23.34, "text": " You can turn your head around, mouth movements are looking great, and eye movements are also"}, {"start": 23.34, "end": 27.28, "text": " translated into the target footage."}, {"start": 27.28, "end": 32.18, "text": " So of course, as we always say, two more papers down the line and it will be even better"}, {"start": 32.18, "end": 35.3, "text": " and cheaper than this."}, {"start": 35.3, "end": 41.88, "text": " As you see, some papers are so well done and are so clear that they just speak for themselves."}, {"start": 41.88, "end": 43.34, "text": " This is one of them."}, {"start": 43.34, "end": 48.78, "text": " To use this technique, all you need to do is record a video of yourself, add just one"}, {"start": 48.78, "end": 54.3, "text": " image of the target subject, run this learning based algorithm, and there you go."}, {"start": 54.3, "end": 59.379999999999995, "text": " If you stay until the end of this video, you will see even more people introducing themselves"}, {"start": 59.379999999999995, "end": 60.879999999999995, "text": " as me."}, {"start": 60.879999999999995, "end": 67.34, "text": " As noted, many important gestures are being translated, such as head, mouth, and eye movement,"}, {"start": 67.34, "end": 72.66, "text": " but what's even better is that even full body movement works."}, {"start": 72.66, "end": 75.1, "text": " Absolutely incredible."}, {"start": 75.1, "end": 79.66, "text": " Now there are plenty of techniques out there that can create deepfakes, many of which we"}, {"start": 79.66, "end": 84.46, "text": " have talked about in this series, so what sets this one apart?"}, {"start": 84.46, "end": 90.46, "text": " Well, one, most previous algorithms required additional information, for instance, facial"}, {"start": 90.46, "end": 94.38, "text": " landmarks or a pose estimation of the target subject."}, {"start": 94.38, "end": 97.38, "text": " This one requires no knowledge of the image."}, {"start": 97.38, "end": 100.86, "text": " As a result, this technique becomes so much more general."}, {"start": 100.86, "end": 105.94, "text": " We can create high quality deepfakes with just one photo of the target subject, make"}, {"start": 105.94, "end": 111.94, "text": " ourselves dance like a professional, and what's more, hold on to your papers because it also"}, {"start": 111.94, "end": 118.34, "text": " works on non-humanoid and cartoon models, and even that's not all, we can even synthesize"}, {"start": 118.34, "end": 124.74, "text": " an animation of a robot arm by using another one as a driving sequence."}, {"start": 124.74, "end": 128.78, "text": " So why is it that it doesn't need all this additional information?"}, {"start": 128.78, "end": 133.9, "text": " Well, if we look under the hood, we see that it is a neural network based method that"}, {"start": 133.9, "end": 137.46, "text": " generates all this information by itself."}, {"start": 137.46, "end": 142.70000000000002, "text": " It identifies what kind of movements and transformations are taking place in our driving video."}, {"start": 142.70000000000002, "end": 148.70000000000002, "text": " You can see that the learned key points here follow the motion of the videos really well."}, {"start": 148.70000000000002, "end": 155.82, "text": " Now, we 
pack up all this information and send it over to the generator to warp the target"}, {"start": 155.82, "end": 161.5, "text": " image appropriately, taking into consideration possible occlusions that may occur."}, {"start": 161.5, "end": 166.38, "text": " This means that some parts of the image may now be uncovered where we don't know what"}, {"start": 166.38, "end": 168.1, "text": " the background should look like."}, {"start": 168.1, "end": 173.34, "text": " Normally, we will do this by hand with an image-inpainting technique, for instance, you see"}, {"start": 173.34, "end": 178.66, "text": " the legendary patchmatch algorithm here that does it, however, in this case, the neural"}, {"start": 178.66, "end": 182.34, "text": " network does it automatically by itself."}, {"start": 182.34, "end": 187.3, "text": " If you are seeking for flaws in the output, these will be important regions to look at."}, {"start": 187.3, "end": 192.98000000000002, "text": " And it not only requires less information than previous techniques, but it also outperforms"}, {"start": 192.98000000000002, "end": 195.3, "text": " them significantly."}, {"start": 195.3, "end": 198.94, "text": " Yes, there is still room to improve this."}, {"start": 198.94, "end": 203.94, "text": " For instance, the sudden head rotation here seems to generate an excessive amount of visual"}, {"start": 203.94, "end": 205.26000000000002, "text": " artifacts."}, {"start": 205.26000000000002, "end": 210.98000000000002, "text": " The source code and even an example colab notebook is available, I think it is one of the most"}, {"start": 210.98000000000002, "end": 213.5, "text": " accessible papers in this area."}, {"start": 213.5, "end": 217.86, "text": " Want me south and make sure to have a look in the video description and try to run your"}, {"start": 217.86, "end": 219.26, "text": " own experiments."}, {"start": 219.26, "end": 224.3, "text": " Let me know in the comments how they went or feel free to drop by at our Discord server"}, {"start": 224.3, "end": 229.86, "text": " where all of you fellow scholars are welcome to discuss ideas and learn together in a kind"}, {"start": 229.86, "end": 231.5, "text": " and respectful environment."}, {"start": 231.5, "end": 236.34, "text": " The link is available in the video description, it is completely free and if you have joined,"}, {"start": 236.34, "end": 238.34, "text": " make sure to leave a short introduction."}, {"start": 238.34, "end": 244.22, "text": " Now, of course, beyond the many amazing use cases of this in reviving deceased actors,"}, {"start": 244.22, "end": 249.98000000000002, "text": " creating beautiful visual art, redubbing movies and more, unfortunately, there are people"}, {"start": 249.98000000000002, "end": 254.34, "text": " around the world who are rubbing their palms together in excitement to use this to their"}, {"start": 254.34, "end": 255.34, "text": " advantage."}, {"start": 255.34, "end": 259.7, "text": " So, you may ask why make these videos on deepfakes?"}, {"start": 259.7, "end": 263.46, "text": " Why spread this knowledge, especially now with the source codes?"}, {"start": 263.46, "end": 268.62, "text": " Well, I think step number one is to make sure to inform the public that these deepfakes"}, {"start": 268.62, "end": 274.18, "text": " can now be created quickly and inexpensively and they don't require a trained scientist"}, {"start": 274.18, "end": 275.18, "text": " anymore."}, {"start": 275.18, "end": 279.46, "text": " If this can be done, it is of utmost 
importance that we all know about it."}, {"start": 279.46, "end": 285.74, "text": " Then, beyond that, step number two, as a service to the public, I attend to EU and NATO"}, {"start": 285.74, "end": 291.74, "text": " conferences and inform key political and military decision makers about the existence and details"}, {"start": 291.74, "end": 296.86, "text": " of these techniques to make sure that they also know about these and using that knowledge"}, {"start": 296.86, "end": 299.26, "text": " they can make better decisions for us."}, {"start": 299.26, "end": 300.90000000000003, "text": " You see me doing it here."}, {"start": 300.90000000000003, "end": 305.58, "text": " And again, you see this technique in action here to demonstrate that it works really well"}, {"start": 305.58, "end": 307.94, "text": " for video footage in the world."}, {"start": 307.94, "end": 312.82, "text": " Note that these talks and consultations all happen free of charge and if they keep inviting"}, {"start": 312.82, "end": 317.34000000000003, "text": " me, I'll keep showing up to help with this in the future as a service to the public."}, {"start": 317.34, "end": 322.21999999999997, "text": " The cool thing is that later, over dinner, they tend to come back to me with a summary"}, {"start": 322.21999999999997, "end": 326.7, "text": " of their understanding of the situation and I highly appreciate the fact that they are"}, {"start": 326.7, "end": 329.82, "text": " open to what we scientists have to say."}, {"start": 329.82, "end": 333.26, "text": " And now, please enjoy the promised footage."}, {"start": 333.26, "end": 337.5, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fahir."}, {"start": 337.5, "end": 341.73999999999995, "text": " It is important for you to know that everybody can make deepfakes now."}, {"start": 341.74, "end": 348.5, "text": " You can turn your head around, mouth movements are looking great, and eye movements are also"}, {"start": 348.5, "end": 352.42, "text": " translated into the target footage."}, {"start": 352.42, "end": 357.34000000000003, "text": " And of course, as we always say, two more papers down the line and it will be even better"}, {"start": 357.34000000000003, "end": 359.94, "text": " and cheaper than this."}, {"start": 359.94, "end": 362.82, "text": " This episode has been supported by weights and biases."}, {"start": 362.82, "end": 367.66, "text": " Here, they show you how you can use sweeps, their tool to search through high-dimensional"}, {"start": 367.66, "end": 371.7, "text": " parameter spaces and find the best performing model."}, {"start": 371.7, "end": 376.3, "text": " Weight and biases provide tools to track your experiments in your deep learning projects."}, {"start": 376.3, "end": 381.18, "text": " Their system is designed to save you a ton of time and money and it is actively used"}, {"start": 381.18, "end": 387.86, "text": " in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more."}, {"start": 387.86, "end": 392.98, "text": " And the best part is that if you are an academic or have an open source project, you can use"}, {"start": 392.98, "end": 394.41999999999996, "text": " their tools for free."}, {"start": 394.41999999999996, "end": 397.06, "text": " It really is as good as it gets."}, {"start": 397.06, "end": 402.06, "text": " Make sure to visit them through wnb.com slash papers or just click the link in the video"}, {"start": 402.06, "end": 405.5, "text": " description and you can get a free demo 
today."}, {"start": 405.5, "end": 409.98, "text": " Our thanks to weights and biases for their long-standing support and for helping us make"}, {"start": 409.98, "end": 411.38, "text": " better videos for you."}, {"start": 411.38, "end": 438.82, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=eTYcMB6Yhe8
Can Self-Driving Cars Learn Depth Perception? 🚘
❤️ Check out Weights & Biases here and sign up for a free demo here: https://www.wandb.com/papers The showcased instrumentation post is available here: https://app.wandb.ai/stacey/sfmlearner/reports/See-3D-from-Video%3A-Depth-Perception-for-Self-Driving-Cars--Vmlldzo2Nzg2Nw 📝 The paper "Unsupervised Learning of Depth and Ego-Motion from Video" is available here: https://people.eecs.berkeley.edu/~tinghuiz/projects/SfMLearner/  🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #SelfDrivingCars
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. When we, humans, look at an image or a piece of video footage, such as this one, we all understand that this is just a 2D projection of the world around us. So much so that if we have the time and patience, we could draw a depth map that describes the distance of each object from the camera. This information is highly useful because we can use it to create real-time defocus effects for virtual reality and computer games. Or even perform this Ken Burns effect in 3D, or in other words, zoom and pan around in a photograph. But with a beautiful twist, because in the meantime we can reveal the depth of the image. However, when we show the same images to a machine, all it sees is a bunch of numbers. Fortunately, with the ascendancy of neural network-based learning algorithms, we now have a chance to do this reasonably well. For instance, we discussed this depth perception neural network in an earlier episode, which was trained using a large number of input-output pairs, where the inputs are a bunch of images, and the outputs are their corresponding depth maps for the neural network to learn from. The authors implemented this with a random scene generator, which creates a bunch of these crazy configurations with a lot of occlusions and computes via simulation the appropriate depth map for them. This is what we call supervised learning because we have all these input-output pairs. The solutions are given in the training set to guide the training of the neural network. This is supervised learning, machine learning with crutches. We can also use this depth information to enhance the perception of self-driving cars, but this application is not like the previous two I just mentioned. It is much, much harder because in the earlier supervised learning example, we trained a neural network in a simulation, and then we also used it later in a computer game, which is, of course, another simulation. We control all the variables and the environment here. However, self-driving cars need to be deployed in the real world. These cars also generate a lot of video footage with their sensors, which could be fed back to the neural networks as additional training data, if we had the depth maps for them, which, of course, unfortunately, we don't. And now, with this, we have arrived at the concept of unsupervised learning. Unsupervised learning is proper machine learning, where no crutches are allowed, we just unleash the algorithm on a bunch of data with no labels, and if we do it well, the neural network will learn something useful from it. It is very convenient because any video we have may be used as training data. That would be great, but we have a tiny problem, and that tiny problem is that this sounds impossible. Or it may have sounded impossible until this paper appeared. This work promises us no less than unsupervised depth learning from videos. Since this is unsupervised, it means that during training, all it sees is unlabeled videos from different viewpoints, and it somehow figures out a way to create these depth maps from it. So, how is this even possible? Well, it is possible by adding just one ingenious idea. The idea is that since we don't have the labels, we can't teach the algorithm how to be right, but instead we can teach it to be consistent. That doesn't sound like much, does it?
Well, it makes all the difference because if we ask the algorithm to be consistent, it will find out that a good way to be consistent is to be right. While we are looking at some results to make this clearer, let me add one more real-world example that demonstrates how cool this idea is. Imagine that you are a university professor overseeing an exam in mathematics and someone tells you that for one of the problems, most of the students gave the same answer. If this is the case, there is a good chance that this was the right answer. It is not a hundred percent certain that this is the case, but if most of the students have the same answer, it is much more unlikely that they have all failed the same way. There are many different ways to fail, but there is only one way to succeed. Therefore, if there is consistency, often there is success. And this simple but powerful thought leads to far-reaching conclusions. Let's have a look at some more results. Woohoo! Now this is something. Let me explain why I am so excited about this. This is the input image and this is the perfect depth map that is concealed from our beloved algorithm and is there for us to be able to evaluate its performance. These are two previous works, both use crutches. The first was trained via supervised learning by showing it input-output image pairs with depth maps and it does reasonably well, while the other one gets even less supervision, the worst crutch if you will, and it came up with this. Now the new unsupervised technique was not given any crutches and came up with this. This is a very accurate version of the true depth maps. So what do you know? This neural network-based method looks at unlabeled videos and finds a way to create depth maps by not trying to be right, but trying to be consistent. This is one of those amazing papers where one simple, brilliant idea can change everything and make the impossible possible. What a time to be alive! What you see here is an instrumentation of this depth learning paper we have talked about. This was made by Weights and Biases. I think organizing these experiments really showcases the usability of their system. Also, Weights and Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you're an academic or have an open-source project, you can use their tools for free. It is really as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights and Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
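The "teach it to be consistent" idea above usually takes the form of a view-synthesis loss: predict a depth map and the relative camera pose, use them to warp a neighboring frame into the current view, and penalize the photometric difference from what the camera actually saw. The sketch below is a heavily simplified, hedged version of that loss; the intrinsics, depth, and pose are placeholder inputs here, whereas the paper predicts depth and pose with jointly trained networks and adds refinements such as an explainability mask and multi-scale supervision.

# Hedged sketch of the view-synthesis ("be consistent") loss behind
# unsupervised depth learning: warp a neighboring frame into the current
# view using depth and relative pose, then compare photometrically.
import torch
import torch.nn.functional as F

def view_synthesis_loss(target, source, depth, K, T_src_from_tgt):
    """target, source: (B, 3, H, W); depth: (B, 1, H, W);
    K: (B, 3, 3) intrinsics; T_src_from_tgt: (B, 4, 4) rigid transform."""
    B, _, H, W = target.shape
    # Pixel grid in homogeneous coordinates.
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([xs.float(), ys.float(), torch.ones(H, W)], dim=0).reshape(1, 3, -1)
    # Back-project to 3D camera points: X = depth * K^-1 * p.
    cam = depth.reshape(B, 1, -1) * (torch.inverse(K) @ pix)          # (B, 3, HW)
    cam_h = torch.cat([cam, torch.ones(B, 1, H * W)], dim=1)          # (B, 4, HW)
    # Transform into the source view and project back to pixels.
    src_cam = (T_src_from_tgt @ cam_h)[:, :3]
    src_pix = K @ src_cam
    src_xy = src_pix[:, :2] / src_pix[:, 2:].clamp(min=1e-6)
    # Normalize to [-1, 1] for grid_sample and resample the source frame.
    gx = 2.0 * src_xy[:, 0] / (W - 1) - 1.0
    gy = 2.0 * src_xy[:, 1] / (H - 1) - 1.0
    grid = torch.stack([gx, gy], dim=-1).reshape(B, H, W, 2)
    warped = F.grid_sample(source, grid, align_corners=True)
    # Consistency: the warped source frame should match the target frame.
    return (warped - target).abs().mean()

# Example with placeholder tensors (identity pose, constant depth).
B, H, W = 1, 64, 96
K = torch.tensor([[[100.0, 0.0, W / 2], [0.0, 100.0, H / 2], [0.0, 0.0, 1.0]]])
loss = view_synthesis_loss(torch.rand(B, 3, H, W), torch.rand(B, 3, H, W),
                           torch.ones(B, 1, H, W), K, torch.eye(4).unsqueeze(0))

In the actual paper, minimizing exactly this kind of reconstruction error over many frame pairs is what forces the depth network to produce depth maps that are consistent across viewpoints, and therefore largely correct.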
[{"start": 0.0, "end": 4.48, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Ysorna Ifahir."}, {"start": 4.48, "end": 9.200000000000001, "text": " When we, humans, look at an image or a piece of video footage, such as this one,"}, {"start": 9.200000000000001, "end": 14.16, "text": " we all understand that this is just a 2D projection of the world around us."}, {"start": 14.16, "end": 22.88, "text": " So much so that if we have the time and patience, we could draw a depth map that describes the distance of each object from the camera."}, {"start": 22.88, "end": 30.88, "text": " This information is highly useful because we can use it to create real-time defocus effects for virtual reality and computer games."}, {"start": 34.08, "end": 42.16, "text": " Or even perform this can burns effect in 3D, or in other words, zoom and pan around in a photograph."}, {"start": 42.16, "end": 48.32, "text": " But with a beautiful twist because in the meantime we can reveal the depth of the image."}, {"start": 48.32, "end": 54.08, "text": " However, when we show the same images to a machine, all it sees is a bunch of numbers."}, {"start": 54.08, "end": 61.44, "text": " Fortunately, with the ascendancy of neural network-based learning algorithms, we now have a chance to do this reasonably well."}, {"start": 61.44, "end": 69.92, "text": " For instance, we discussed this depth perception neural network in an earlier episode, which was trained using a large number of input output pairs,"}, {"start": 69.92, "end": 77.6, "text": " where the inputs are a bunch of images, and the outputs are their corresponding depth maps for the neural network to learn from."}, {"start": 77.6, "end": 89.36, "text": " The authors implemented this with a random scene generator, which creates a bunch of these crazy configurations with a lot of occlusions and computes via simulation the appropriate depth map for them."}, {"start": 89.36, "end": 95.03999999999999, "text": " This is what we call supervised learning because we have all these input output pairs."}, {"start": 95.03999999999999, "end": 99.84, "text": " The solutions are given in the training set to guide the training of the neural network."}, {"start": 99.84, "end": 104.0, "text": " This is supervised learning, machine learning with crutches."}, {"start": 104.0, "end": 113.76, "text": " We can also use this depth information to enhance the perception of self-driving cars, but this application is not like the previous two I just mentioned."}, {"start": 113.76, "end": 122.0, "text": " It is much, much harder because in the earlier supervised learning example, we have trained a neural network in a simulation,"}, {"start": 122.0, "end": 128.48, "text": " and then we also use it later in a computer game, which is, of course, another simulation."}, {"start": 128.48, "end": 132.08, "text": " We control all the variables and the environment here."}, {"start": 132.08, "end": 136.64000000000001, "text": " However, self-driving cars need to be deployed in the real world."}, {"start": 136.64000000000001, "end": 144.48000000000002, "text": " These cars also generate a lot of video footage with their sensors, which could be fed back to the neural networks as additional training data,"}, {"start": 144.48000000000002, "end": 150.56, "text": " if we had the depth maps for them, which, of course, unfortunately, we don't."}, {"start": 150.56, "end": 155.60000000000002, "text": " And now, with this, we have arrived to the concept of unsupervised learning."}, {"start": 
155.6, "end": 164.72, "text": " Unsupervised learning is proper machine learning, where no crutches are allowed, we just unleash the algorithm on a bunch of data with no labels,"}, {"start": 164.72, "end": 169.44, "text": " and if we do it well, the neural network will learn something useful from it."}, {"start": 169.44, "end": 174.79999999999998, "text": " It is very convenient because any video we have may be used as training data."}, {"start": 174.79999999999998, "end": 182.48, "text": " That would be great, but we have a tiny problem, and that tiny problem is that this sounds impossible."}, {"start": 182.48, "end": 187.2, "text": " Or it may have sounded impossible until this paper appeared."}, {"start": 187.2, "end": 192.88, "text": " This work promises us no less than unsupervised depth learning from videos."}, {"start": 192.88, "end": 200.23999999999998, "text": " Since this is unsupervised, it means that during training, all it sees is unlabeled videos from different viewpoints,"}, {"start": 200.23999999999998, "end": 205.04, "text": " and somehow figures out a way to create these depth maps from it."}, {"start": 205.04, "end": 207.51999999999998, "text": " So, how is this even possible?"}, {"start": 207.51999999999998, "end": 212.07999999999998, "text": " Well, it is possible by adding just one ingenious idea."}, {"start": 212.08, "end": 217.36, "text": " The idea is that since we don't have the labels, we can't teach the algorithm how to be right,"}, {"start": 217.36, "end": 221.04000000000002, "text": " but instead we can teach it to be consistent."}, {"start": 221.04000000000002, "end": 223.60000000000002, "text": " That doesn't sound like much, does it?"}, {"start": 223.60000000000002, "end": 228.08, "text": " Well, it makes all the difference because if we ask the algorithm to be consistent,"}, {"start": 228.08, "end": 232.8, "text": " it will find out that a good way to be consistent is to be right."}, {"start": 232.8, "end": 235.60000000000002, "text": " While we are looking at some results to make this clearer,"}, {"start": 235.60000000000002, "end": 241.04000000000002, "text": " let me add one more real-world example that demonstrates how cool this idea is."}, {"start": 241.04, "end": 245.76, "text": " Imagine that you are a university professor overseeing an exam in mathematics"}, {"start": 245.76, "end": 251.76, "text": " and someone tells you that for one of the problems, most of the students give the same answer."}, {"start": 251.76, "end": 255.92, "text": " If this is the case, there is a good chance that this was the right answer."}, {"start": 255.92, "end": 258.96, "text": " It is not a hundred percent chance that this is the case,"}, {"start": 258.96, "end": 265.92, "text": " but if most of the students have the same answer, it is much more unlikely that they have all failed the same way."}, {"start": 265.92, "end": 270.08, "text": " There are many different ways to fail, but there is only one way to succeed."}, {"start": 270.08, "end": 274.32, "text": " Therefore, if there is consistency, often there is success."}, {"start": 274.32, "end": 278.8, "text": " And this simple but powerful thought leads to far-eaching conclusions."}, {"start": 279.68, "end": 281.2, "text": " Let's have a look at some more results."}, {"start": 282.64, "end": 285.52, "text": " Woohoo! 
Now this is something."}, {"start": 285.52, "end": 288.47999999999996, "text": " Let me explain why I am so excited for this."}, {"start": 288.47999999999996, "end": 294.79999999999995, "text": " This is the input image and this is the perfect depth map that is concealed from our beloved algorithm"}, {"start": 294.79999999999995, "end": 298.47999999999996, "text": " and is there for us to be able to evaluate its performance."}, {"start": 298.48, "end": 302.40000000000003, "text": " These are two previous works, both use crutches."}, {"start": 302.40000000000003, "end": 308.8, "text": " The first was trained via supervised learning by showing it input output image pairs with depth maps"}, {"start": 308.8, "end": 313.92, "text": " and it does reasonably well while the other one gets even less supervision,"}, {"start": 313.92, "end": 317.44, "text": " the worst crutch if you will, and it came up with this."}, {"start": 318.48, "end": 324.08000000000004, "text": " Now the unsupervised new technique was not given any crutches and came up with this."}, {"start": 324.08, "end": 332.47999999999996, "text": " This is a very accurate version of the true depth maps."}, {"start": 333.12, "end": 334.15999999999997, "text": " So what do you know?"}, {"start": 334.15999999999997, "end": 339.84, "text": " This neural network-based method looks at unlabeled videos and finds a way to create depth maps"}, {"start": 339.84, "end": 343.44, "text": " by not trying to be right, but trying to be consistent."}, {"start": 344.32, "end": 350.4, "text": " This is one of those amazing papers where one simple, brilliant idea can change everything"}, {"start": 350.4, "end": 354.32, "text": " and make the impossible possible. What a time to be alive!"}, {"start": 354.88, "end": 359.84, "text": " What you see here is an instrumentation of this depth learning paper we have talked about."}, {"start": 359.84, "end": 365.59999999999997, "text": " This was made by Wets and Biasis. I think organizing these experiments really showcases the"}, {"start": 365.59999999999997, "end": 371.35999999999996, "text": " usability of their system. Also, Wets and Biasis provides tools to track your experiments"}, {"start": 371.35999999999996, "end": 376.71999999999997, "text": " in your deep learning projects. Their system is designed to save you a ton of time and money"}, {"start": 376.72, "end": 382.96000000000004, "text": " and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research,"}, {"start": 382.96000000000004, "end": 389.44000000000005, "text": " GitHub and more. And the best part is that if you're an academic or have an open-source project,"}, {"start": 389.44000000000005, "end": 393.92, "text": " you can use their tools for free. It is really as good as it gets."}, {"start": 393.92, "end": 400.48, "text": " Make sure to visit them through wnbe.com slash papers or just click the link in the video description"}, {"start": 400.48, "end": 406.40000000000003, "text": " and you can get a free demo today. Our thanks to Wets and Biasis for their long-standing support"}, {"start": 406.4, "end": 411.44, "text": " and for helping us make better videos for you. Thanks for watching and for your generous support"}, {"start": 411.44, "end": 441.28, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=3Wppf_CNvD0
Google’s Chatbot: Almost Perfect 🤖
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Towards a Human-like Open-Domain Chatbot" is available here: https://arxiv.org/abs/2001.09977 ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. When I was growing up, IQ tests were created by humans to test the intelligence of other humans. If someone told me just 10 years ago that algorithms would create IQ tests to be taken by other algorithms, I wouldn't have believed a word of it. Yet, just a year ago, scientists at DeepMind created a program that is able to generate a large number of problems that test abstract reasoning capabilities. They are inspired by human IQ tests with all these questions about sizes, colors, and progressions. They wrote their own neural network to take these tests, which performed remarkably well. How well exactly? In the presence of nasty distractor objects, it was able to find the correct solution about 62% of the time, and if we remove these distractors, which, I will note, are good at misdirecting humans too, the AI was correct 78% of the time. Awesome. But today, we are capable of writing even more sophisticated learning algorithms that can even complete our sentences. Not so long ago, the OpenAI lab published GPT-2, a technique that they unleashed to read the internet and it learned our language by itself. A few episodes ago, we gave it a spin and I almost fell out of the chair when I saw that it could finish my sentences about fluid simulations in such a scholarly way that I think could easily fool a layperson. Have a look here and judge for yourself. This GPT-2 technique was a neural network variant that was trained using one and a half billion parameters. At the risk of oversimplifying what that means, it roughly refers to the internal complexity of the network, or in other words, how many weights and connections are there. And now, the Google Brain team has released Meena, an open-domain chatbot that uses 2.6 billion parameters and shows remarkable human-like properties. The chatbot part means a piece of software or a machine that we can talk to, and the open-domain part refers to the fact that we can try any topic, hotels, movies, the ocean, favorite movie characters or pretty much anything we can think of and expect the bot to do well. So how do we know that it's really good? Well, let's try to evaluate it in two different ways. First, let's try the super fun but less scientific way or, in other words, what we are already doing, looking at chat logs. You see Meena writing on the left and the human being on the right, and it not only answers questions sensibly and coherently but is even capable of cracking a joke. Of course, if you consider a pun to be a joke, that is. You see a selection of topics here where the user talks with Meena about movies and expresses the desire to see the Grand Budapest Hotel, which is indeed a very human-like quality. It can also try to come up with a proper definition of philosophy. And now, since we are scholars, we would also like to measure how human-like this is in a more scientific manner as well. Now is a good time to hold onto your papers because this is measured by the sensibleness and specificity average score, or SSA in short, in which humans are here, previous chatbots are down there, and Meena is right there close by, which means that it could easily be confused for a real human. That already sounds like science fiction, however, let's be a little nosy here and also ask: how do we know if this SSA is any good at predicting what is human-like and what isn't? Excellent question.
When measuring human likeness for these chatbots and plugging in the SSA, again, the sensibleness and specificity average, we see that the two correlate really strongly, which means that they seem to measure very similar things, and in this case SSA can indeed be used as a proxy for human likeness. The coefficient of determination is 0.96. This is a several times stronger correlation than we can measure between the intelligence and the grades of a student, which is already a great correlation. This is a remarkable result. Now what we get out of this is that the SSA is much easier and more precise to measure than human likeness and is hence used throughout the paper. So, chatbots. What are all these things useful for? Well, do you remember Google's technique that would automatically use an AI to talk to your callers and screen your calls? Or even make calls on your behalf? When connected to a text-to-speech synthesizer, something that Google already does amazingly well, Meena could really come alive in our daily lives soon. What a time to be alive. This episode has been supported by Lambda. If you're a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you that they are offering GPU Cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold on to your papers because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to lambdalabs.com slash papers and sign up for one of their amazing GPU instances today. Thanks to Lambda for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
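Since the argument above leans on the sensibleness and specificity average (SSA) and on the coefficient of determination, here is a small, hedged sketch of both computations. The label arrays and scores below are invented for illustration and are not the paper's data; in the paper, the per-response labels come from human crowdworkers.

# Toy sketch: SSA as the mean of the sensibleness and specificity rates of a
# chatbot's responses, plus the coefficient of determination (R^2) of a simple
# linear fit between SSA and a human-likeness score. All numbers are made up.
import numpy as np

def ssa(sensible, specific):
    """sensible, specific: arrays of 0/1 labels, one pair per chatbot response."""
    return 0.5 * (np.mean(sensible) + np.mean(specific))

def r_squared(x, y):
    """Coefficient of determination of the least-squares line y ~ a*x + b."""
    a, b = np.polyfit(x, y, deg=1)
    residuals = y - (a * x + b)
    return 1.0 - np.sum(residuals ** 2) / np.sum((y - np.mean(y)) ** 2)

# Hypothetical per-chatbot SSA and human-likeness scores, in percent.
ssa_scores = np.array([31.0, 56.0, 72.0, 79.0, 86.0])
human_likeness = np.array([24.0, 42.0, 60.0, 65.0, 82.0])
print(r_squared(ssa_scores, human_likeness))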
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Karo Ejorna Ifehir."}, {"start": 4.8, "end": 10.56, "text": " When I was growing up, IQ tests were created by humans to test the intelligence of other"}, {"start": 10.56, "end": 11.64, "text": " humans."}, {"start": 11.64, "end": 17.76, "text": " If someone told me just 10 years ago that algorithms will create IQ tests to be taken"}, {"start": 17.76, "end": 22.0, "text": " by other algorithms, I wouldn't have believed a word of it."}, {"start": 22.0, "end": 28.48, "text": " Yet, just a year ago, scientists at DeepMind created a program that is able to generate"}, {"start": 28.48, "end": 33.24, "text": " a large amount of problems that test abstract reasoning capabilities."}, {"start": 33.24, "end": 39.4, "text": " They are inspired by human IQ tests with all these questions about sizes, colors, and"}, {"start": 39.4, "end": 40.4, "text": " progressions."}, {"start": 40.4, "end": 45.92, "text": " They wrote their own neural network to take these tests which performed remarkably well."}, {"start": 45.92, "end": 47.480000000000004, "text": " How well exactly?"}, {"start": 47.480000000000004, "end": 52.84, "text": " In the presence of nasty distractor objects, it was able to find out the correct solution"}, {"start": 52.84, "end": 59.56, "text": " about 62% of the time and if we remove these distractors, which I will note that are"}, {"start": 59.56, "end": 66.16, "text": " good at misdirecting humans too, the AI was correct 78% of the time."}, {"start": 66.16, "end": 67.32000000000001, "text": " Awesome."}, {"start": 67.32000000000001, "end": 71.92, "text": " But today, we are capable of writing even more sophisticated learning algorithms that"}, {"start": 71.92, "end": 74.72, "text": " can even complete our sentences."}, {"start": 74.72, "end": 80.84, "text": " Not so long ago, the OpenAI Lab published GPT2, a technique that they unleashed to read"}, {"start": 80.84, "end": 84.84, "text": " the internet and it learned our language by itself."}, {"start": 84.84, "end": 89.96000000000001, "text": " A few episodes ago, we gave it a spin and I almost fell out of the chair when I saw that"}, {"start": 89.96000000000001, "end": 96.16, "text": " it could finish my sentences about fluid simulations in such a scholarly way that I think could"}, {"start": 96.16, "end": 98.52000000000001, "text": " easily fool a layperson."}, {"start": 98.52000000000001, "end": 101.32000000000001, "text": " Have a look here and judge for yourself."}, {"start": 101.32000000000001, "end": 106.56, "text": " This GPT2 technique was a neural network variant that was trained using one and a half"}, {"start": 106.56, "end": 108.2, "text": " billion parameters."}, {"start": 108.2, "end": 113.24000000000001, "text": " At the risk of oversimplifying what that means, it roughly refers to the internal complexity"}, {"start": 113.24000000000001, "end": 118.92, "text": " of the networks or in other words, how many weights and connections are there."}, {"start": 118.92, "end": 125.84, "text": " And now, the Google Brain team has released MINA, an open domain chatbot that uses 2.6"}, {"start": 125.84, "end": 130.76, "text": " billion parameters and shows remarkable human-like properties."}, {"start": 130.76, "end": 136.36, "text": " The chatbot part means a piece of software or a machine that we can talk to and the open"}, {"start": 136.36, "end": 143.56, "text": " domain part refers to the fact that we can try any topic, hotels, movies, the 
ocean,"}, {"start": 143.56, "end": 148.92000000000002, "text": " favorite movie characters or pretty much anything we can think of and expect a bot to"}, {"start": 148.92000000000002, "end": 150.12, "text": " do well."}, {"start": 150.12, "end": 152.48000000000002, "text": " So how do we know that it's really good?"}, {"start": 152.48000000000002, "end": 156.32000000000002, "text": " Well, let's try to evaluate it in two different ways."}, {"start": 156.32000000000002, "end": 162.44000000000003, "text": " First, let's try the super fun but less scientific way or in other words, what we are already"}, {"start": 162.44000000000003, "end": 165.12, "text": " doing, looking at chat logs."}, {"start": 165.12, "end": 170.24, "text": " You see MINA writing on the left and the human being on the right and it not only answers"}, {"start": 170.24, "end": 176.88, "text": " questions sensibly and coherently but is even capable of cracking a joke."}, {"start": 176.88, "end": 181.84, "text": " Of course, if you consider a pun to be a joke, that is."}, {"start": 181.84, "end": 187.08, "text": " You see a selection of topics here where the user talks with MINA about movies and about"}, {"start": 187.08, "end": 192.92000000000002, "text": " expresses the desire to see the Grand Budapest Hotel which is indeed a very human-like"}, {"start": 192.92000000000002, "end": 194.8, "text": " quality."}, {"start": 194.8, "end": 199.32000000000002, "text": " It can also try to come up with a proper definition of philosophy."}, {"start": 199.32000000000002, "end": 204.56, "text": " And now, since we are scholars, we would also like to measure how human like this is in"}, {"start": 204.56, "end": 206.92000000000002, "text": " a more scientific manner as well."}, {"start": 206.92000000000002, "end": 212.20000000000002, "text": " Now is a good time to hold onto your papers because this is measured by the sensibleness"}, {"start": 212.20000000000002, "end": 219.28, "text": " and specificity average score from now on SSA in short in which humans are here, previous"}, {"start": 219.28, "end": 226.12, "text": " chatbots are down there and MINA is right there close by which means that it is easy to"}, {"start": 226.12, "end": 228.72, "text": " be confused for a real human."}, {"start": 228.72, "end": 234.2, "text": " That already sounds like science fiction, however, let's be a little nosy here and also"}, {"start": 234.2, "end": 242.48, "text": " ask how do we know if this SSA is any good in predicting what is human like and what isn't?"}, {"start": 242.48, "end": 243.96, "text": " Excellent question."}, {"start": 243.96, "end": 249.44, "text": " In measuring human likeness for these chatbots, plugging in the SSA, again, the sensibleness"}, {"start": 249.44, "end": 254.76000000000002, "text": " and specificity average, we see that they correlate really strongly which means that the two"}, {"start": 254.76000000000002, "end": 261.64, "text": " seem to measure very similar things and in this case SSA can indeed be used as a proxy"}, {"start": 261.64, "end": 263.48, "text": " for human likeness."}, {"start": 263.48, "end": 267.36, "text": " The coefficient of determination is 0.96."}, {"start": 267.36, "end": 272.76, "text": " This is a several times stronger correlation than we can measure between the intelligence"}, {"start": 272.76, "end": 277.28, "text": " and the grades of a student which is already a great correlation."}, {"start": 277.28, "end": 279.64, "text": " This is a remarkable result."}, {"start": 279.64, "end": 
285.03999999999996, "text": " Now what we get out of this is that the SSA is much easier and precise to measure than"}, {"start": 285.03999999999996, "end": 289.24, "text": " human likeness and is hence used throughout the paper."}, {"start": 289.24, "end": 293.28, "text": " So chatbots say, what are all these things useful for?"}, {"start": 293.28, "end": 298.52, "text": " Well do you remember Google's technique that would automatically use an AI to talk to"}, {"start": 298.52, "end": 301.48, "text": " your colors and screen your calls?"}, {"start": 301.48, "end": 304.36, "text": " Or even make calls on your behalf?"}, {"start": 304.36, "end": 309.48, "text": " When connected to a text to speech synthesizer, something that Google already does amazingly"}, {"start": 309.48, "end": 314.24, "text": " well, Mina could really come alive in our daily lives soon."}, {"start": 314.24, "end": 316.16, "text": " What a time to be alive."}, {"start": 316.16, "end": 318.6, "text": " This episode has been supported by Lambda."}, {"start": 318.6, "end": 323.8, "text": " If you're a researcher or a startup looking for cheap GPU compute to run these algorithms,"}, {"start": 323.8, "end": 325.8, "text": " check out Lambda GPU Cloud."}, {"start": 325.8, "end": 331.12, "text": " I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you"}, {"start": 331.12, "end": 334.4, "text": " that they are offering GPU Cloud services as well."}, {"start": 334.4, "end": 341.32, "text": " The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19."}, {"start": 341.32, "end": 346.2, "text": " Lambda's web-based IDE lets you easily access your instance right in your browser."}, {"start": 346.2, "end": 351.88, "text": " And finally, hold on to your papers because the Lambda GPU Cloud costs less than half of"}, {"start": 351.88, "end": 354.16, "text": " AWS and Azure."}, {"start": 354.16, "end": 359.48, "text": " Make sure to go to lambdaleps.com, slash papers and sign up for one of their amazing GPU"}, {"start": 359.48, "end": 360.88, "text": " instances today."}, {"start": 360.88, "end": 364.32, "text": " Thanks to Lambda for helping us make better videos for you."}, {"start": 364.32, "end": 393.88, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=bXzauli1TyU
This Neural Network Regenerates…Kind Of 🦎
❤️ Check out Weights & Biases here and sign up for a free demo here: https://www.wandb.com/papers The shown blog post is available here: https://www.wandb.com/articles/visualize-xgboost-in-one-line 📝 The paper "Growing Neural Cellular Automata" is available here: https://distill.pub/2020/growing-ca/ Game of Life source: https://copy.sh/life/  🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. Today, we are going to play with a cellular automaton. You can imagine this automaton as small games where we have a bunch of cells and a set of simple rules that describe when a cell should be full and when it should be empty. These rules typically depend on the state of the neighboring cells. For instance, perhaps the most well-known form of this cellular automaton is John Horton Conway's Game of Life, which simulates a tiny world where each cell represents a little life form. The rules, again, depend on the neighbors of this cell. If there are too many neighbors, they will die due to overpopulation. If too few, they will die due to underpopulation. And if they have just the right amount of neighbors, they will thrive and reproduce. So why is this so interesting? Well, this cellular automaton shows us that a small set of simple rules can give rise to remarkably complex life forms such as gliders, spaceships, and even John von Neumann's universal constructor or, in other words, self-replicating machines. I hope you think that's quite something, and in this paper today, we are going to take this concept further. Way further. This cellular automaton is programmed to evolve a single cell to grow into a prescribed kind of life form. Apart from that, there are many other key differences from other works, and we will highlight two of them today. One, the cell state is a little different because it can either be empty, growing, or mature, and even more importantly, two, the mathematical formulation of the problem is written in a way that is quite similar to how we train a deep neural network to accomplish something. This is absolutely amazing. Why is that? Well, because it gives rise to a highly useful feature, namely that we can teach it to grow these prescribed organisms. But wait, over time, some of them seem to decay, some of them can stop growing, and some of them will be responsible for your nightmares, so from this point on, proceed with care. In the next experiment, the authors describe an additional step in which it can recover from these undesirable states. And now, hold on to your papers because this leads to one of the major points of this paper. If it can recover from undesirable states, can it perhaps regenerate when damaged? Well, here you will see all kinds of damage, and then this happens. Wow! The best part is that this thing wasn't even trained to be able to perform this kind of regeneration. The objective for training was that it should be able to perform its task of growing and maintaining shape, and it turns out some sort of regeneration is included in that. It can also handle rotations, which will give rise to a lot of fun and, as noted a moment ago, some nightmare-ish experiments. And note that this is a paper in the Distill journal, which not only means that it is excellent, but also interactive, so you can run many of these experiments yourself right in your browser. If the name of the first author, Alexander Mordvintsev, rings a bell, he worked on Google's DeepDream approximately five years ago. How far we have come since then! My goodness! Loving these crazy, non-traditional research papers, and I'm looking forward to seeing more of these. This episode has been supported by Weights & Biases. Here, they show you how you can visualize the training process for your boosted trees with XGBoost using their tool. If you have a closer look, you'll see that all you need is one line of code.
Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you're an academic or have an open-source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zonai-Fahir."}, {"start": 4.64, "end": 8.5, "text": " Today, we are going to play with a cellular automaton."}, {"start": 8.5, "end": 14.06, "text": " You can imagine this automaton as small games where we have a bunch of cells and a set"}, {"start": 14.06, "end": 19.98, "text": " of simple rules that describe when a cell should be full and when it should be empty."}, {"start": 19.98, "end": 24.02, "text": " These rules typically depend on the state of the neighboring cells."}, {"start": 24.02, "end": 29.42, "text": " For instance, perhaps the most well-known form of this cellular automaton is John Horton"}, {"start": 29.42, "end": 35.74, "text": " Conway's Game of Life, which simulates a tiny world where each cell represents a little"}, {"start": 35.74, "end": 37.14, "text": " life form."}, {"start": 37.14, "end": 40.6, "text": " The rules, again, depend on the neighbors of this cell."}, {"start": 40.6, "end": 44.82, "text": " If there are too many neighbors, they will die due to overpopulation."}, {"start": 44.82, "end": 48.94, "text": " If too few, they will die due to underpopulation."}, {"start": 48.94, "end": 54.38, "text": " And if they have just the right amount of neighbors, they will thrive and reproduce."}, {"start": 54.38, "end": 56.7, "text": " So why is this so interesting?"}, {"start": 56.7, "end": 63.18000000000001, "text": " Well, this cellular automaton shows us that a small set of simple rules can give rise to"}, {"start": 63.18000000000001, "end": 69.94, "text": " remarkably complex life forms such as gliders, spaceships, and even John Fawne Neumann's"}, {"start": 69.94, "end": 75.46000000000001, "text": " universal constructor or, in other words, self-replicating machines."}, {"start": 75.46000000000001, "end": 80.06, "text": " I hope you think that's quite something, and in this paper today, we are going to take"}, {"start": 80.06, "end": 82.26, "text": " this concept further."}, {"start": 82.26, "end": 83.58, "text": " Way further."}, {"start": 83.58, "end": 89.3, "text": " This cellular automaton is programmed to evolve a single cell to grow into a prescribed"}, {"start": 89.3, "end": 91.14, "text": " kind of life form."}, {"start": 91.14, "end": 95.46, "text": " Apart from that, there are many other key differences from other works, and we will"}, {"start": 95.46, "end": 97.9, "text": " highlight two of them today."}, {"start": 97.9, "end": 105.53999999999999, "text": " One, the cell state is a little different because it can either be empty, growing, or mature,"}, {"start": 105.53999999999999, "end": 111.86, "text": " and even more importantly, two, the mathematical formulation of the problem is written in a way"}, {"start": 111.86, "end": 117.18, "text": " that is quite similar to how we train a deep neural network to accomplish something."}, {"start": 117.18, "end": 119.62, "text": " This is absolutely amazing."}, {"start": 119.62, "end": 120.62, "text": " Why is that?"}, {"start": 120.62, "end": 125.94, "text": " Well, because it gives rise to a highly useful feature, namely that we can teach it to"}, {"start": 125.94, "end": 128.9, "text": " grow these prescribed organisms."}, {"start": 128.9, "end": 137.86, "text": " But wait, over time, some of them seem to decay, some of them can stop growing, and some"}, {"start": 137.86, "end": 145.3, "text": " of them will be responsible for your nightmares, so from this point on, proceed with care."}, {"start": 145.3, 
"end": 150.54000000000002, "text": " In the next experiment, the authors describe an additional step in which it can recover"}, {"start": 150.54000000000002, "end": 152.9, "text": " from these undesirable states."}, {"start": 152.9, "end": 157.34, "text": " And now, hold on to your papers because this leads to one of the major points of this"}, {"start": 157.34, "end": 158.34, "text": " paper."}, {"start": 158.34, "end": 164.14000000000001, "text": " If it can recover from undesirable states, can it perhaps regenerate when damaged?"}, {"start": 164.14, "end": 170.42, "text": " Well, here you will see all kinds of damage, and then this happens."}, {"start": 170.42, "end": 172.29999999999998, "text": " Wow!"}, {"start": 172.29999999999998, "end": 177.14, "text": " The best part is that this thing wasn't even trained to be able to perform this kind of"}, {"start": 177.14, "end": 178.33999999999997, "text": " regeneration."}, {"start": 178.33999999999997, "end": 183.5, "text": " The objective for training was that it should be able to perform its task of growing and"}, {"start": 183.5, "end": 190.17999999999998, "text": " maintaining shape, and it turns out some sort of regeneration is included in that."}, {"start": 190.18, "end": 195.42000000000002, "text": " It can also handle rotations as well, which will give rise to a lot of fun, and as note"}, {"start": 195.42000000000002, "end": 199.46, "text": " to the moment ago, some nightmare-ish experiments."}, {"start": 199.46, "end": 204.78, "text": " And note that this is a paper in the distilled journal, which not only means that it is excellent,"}, {"start": 204.78, "end": 211.42000000000002, "text": " but also interactive, so you can run many of these experiments yourself right in your browser."}, {"start": 211.42000000000002, "end": 216.94, "text": " If Alexander Mordvinsev, the name of the first author, Ringsabel, he worked on Google's"}, {"start": 216.94, "end": 220.54, "text": " deep dreams approximately five years ago."}, {"start": 220.54, "end": 222.38, "text": " How far we have come since?"}, {"start": 222.38, "end": 223.38, "text": " My goodness!"}, {"start": 223.38, "end": 228.06, "text": " Loving these crazy, non-traditional research papers, and I'm looking forward to seeing"}, {"start": 228.06, "end": 229.5, "text": " more of these."}, {"start": 229.5, "end": 232.57999999999998, "text": " This episode has been supported by weights and biases."}, {"start": 232.57999999999998, "end": 237.26, "text": " Here, they show you how you can visualize the training process for your boosted trees"}, {"start": 237.26, "end": 239.78, "text": " with XG boost using their tool."}, {"start": 239.78, "end": 244.57999999999998, "text": " If you have a closer look, you'll see that all you need is one line of code."}, {"start": 244.58, "end": 249.14000000000001, "text": " weights and biases provides tools to track your experiments in your deep learning projects."}, {"start": 249.14000000000001, "end": 254.02, "text": " Their system is designed to save you a ton of time and money, and it is actively used"}, {"start": 254.02, "end": 260.86, "text": " in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more."}, {"start": 260.86, "end": 265.74, "text": " And the best part is that if you're an academic or have an open-source project, you can use"}, {"start": 265.74, "end": 267.26, "text": " their tools for free."}, {"start": 267.26, "end": 269.98, "text": " It really is as good as it gets."}, {"start": 269.98, "end": 275.98, "text": " 
Make sure to visit them through www.kamslashpapers or just click the link in the video description"}, {"start": 275.98, "end": 278.54, "text": " and you can get a free demo today."}, {"start": 278.54, "end": 283.34000000000003, "text": " Our thanks to weights and biases for their long-standing support and for helping us make"}, {"start": 283.34000000000003, "end": 284.66, "text": " better videos for you."}, {"start": 284.66, "end": 314.62, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
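For reference, the Game of Life rules described in the transcript above (death by over- and underpopulation, birth with exactly three live neighbors) fit in a few lines of Python. This is a minimal, self-contained sketch of the classic hand-written rule set only; the growing neural cellular automaton in the paper replaces such a rule with a learned update.

```python
import numpy as np

def life_step(grid: np.ndarray) -> np.ndarray:
    """One update of Conway's Game of Life on a toroidal grid of 0s and 1s."""
    # Count the eight neighbors of every cell by summing shifted copies of the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # A live cell survives with 2 or 3 neighbors; a dead cell is born with exactly 3.
    survive = (grid == 1) & ((neighbors == 2) | (neighbors == 3))
    born = (grid == 0) & (neighbors == 3)
    return (survive | born).astype(grid.dtype)

# A glider, one of the simple patterns mentioned in the transcript.
world = np.zeros((8, 8), dtype=int)
world[1, 2] = world[2, 3] = world[3, 1] = world[3, 2] = world[3, 3] = 1
for _ in range(4):
    world = life_step(world)
print(world)  # after four steps the glider has moved one cell diagonally
```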
Two Minute Papers
https://www.youtube.com/watch?v=-IbNmc2mTz4
This Neural Network Learned The Style of Famous Illustrators
❤️ Check out Weights & Biases here and sign up for a free demo here: https://www.wandb.com/papers The shown blog post is available here: https://www.wandb.com/articles/better-models-faster-with-weights-biases 📝 The paper "#GANILLA: Generative Adversarial Networks for Image to Illustration Translation" is available here: https://github.com/giddyyupp/ganilla 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Thumbnail background image credit: https://pixabay.com/images/id-3651473/ Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two minute papers with Dr. Károly Zsolnai-Fehér. In the last few years, we have seen a bunch of new AI-based techniques that were specialized in generating new and novel images. This is mainly done through learning-based techniques, typically a generative adversarial network, GAN in short, which is an architecture where a generator neural network creates new images and passes them to a discriminator network which learns to distinguish real photos from these fake, generated images. The two networks learn and improve together and generate better and better images over time. What you see here is a set of results created with a technique by the name CycleGAN. This could even translate daytime into nighttime images, re-imagine a picture of a horse as if it were a zebra, and more. We can also use it for style transfer, a problem where we have two input images, one for content and one for style, and as you see here, the output would be a nice mixture of the two. However, if we use CycleGAN for this kind of style transfer, we'll get something like this. The goal was to learn the style of a select set of famous illustrators of children's books by providing an input image with their work. So, what do you think about the results? The style is indeed completely different from the source, but the algorithm seems a little too heavy-handed and did not leave the content itself intact. Let's have a look at another result with a previous technique. Maybe this will do better. This is DualGAN, which refers to a paper by the name Unsupervised Dual Learning for Image-to-Image Translation. This uses two GANs to perform image translation, where one GAN learns to translate, for instance, day to night, while the other learns the opposite, night to day translation. This, among other advantages, makes things very efficient, but as you see here, in these cases, it preserves the content of the image, but perhaps a little too much, because the style itself does not appear too prominently in the output images. So, CycleGAN is good at transferring style, but a little less so for content, and DualGAN is good at preserving the content, but sometimes adds too little of the style to the image. And now, hold on to your papers because this new technique by the name GANILLA offers us these results. The content is intact, checkmark, and the style goes through really well, checkmark. It preserves the content and transfers the style at the same time. Excellent! One of the many key reasons as to why this happens is the usage of skip connections, which help preserve the content information as we travel deeper into the neural network. So finally, let's put our money where our mouth is and take a bunch of illustrators, marvel at their unique style, and then apply it to photographs and see how the algorithm stacks up against other previous works. Wow! I love these beautiful results! These comparisons really show how good the GANILLA technique is at preserving content. And note that these are distinct artistic styles that are really difficult to reproduce even for humans. It is truly amazing that we can perform such a thing algorithmically. Don't forget that the first style transfer paper appeared approximately 3 to 3.5 years ago, and now we have come a long, long way. The pace of progress in machine learning research is truly stunning.
While we are looking at some more amazing results, this time around, only from GANILLA, I will note that the authors also made a user study with 48 people who favored this against previous techniques. And perhaps leaving the best for last, it can even draw in the style of Hayao Miyazaki. I bet there are a bunch of Miyazaki fans watching, so let me know in the comments what you think about these results. What a time to be alive! This episode has been supported by Weights & Biases. In this post, they show you how to easily iterate on models by visualizing and comparing experiments in real time. Also, Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you are an academic or have an open source project, you can use their tools for free. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
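As a rough illustration of the skip-connection idea mentioned in the transcript above, here is a toy encoder-decoder in Python, assuming PyTorch is available. This is not the GANILLA architecture; the only point it makes is that concatenating early features (here, the raw input) into the decoder gives the network an easy path for preserving content while the style is changed elsewhere.

```python
import torch
import torch.nn as nn

class TinySkipGenerator(nn.Module):
    """Toy encoder-decoder with one skip connection; a sketch, not GANILLA."""
    def __init__(self):
        super().__init__()
        self.down = nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1)            # encode
        self.up = nn.ConvTranspose2d(16, 16, kernel_size=4, stride=2, padding=1)    # decode
        # The output layer sees both the upsampled features and the raw input
        # (the skip connection), so fine content detail is easy to carry through.
        self.out = nn.Conv2d(16 + 3, 3, kernel_size=3, padding=1)
        self.act = nn.ReLU()

    def forward(self, x):
        h = self.act(self.down(x))
        h = self.act(self.up(h))
        h = torch.cat([h, x], dim=1)  # skip connection: concatenate input with features
        return torch.tanh(self.out(h))

img = torch.randn(1, 3, 64, 64)        # a dummy "photograph"
print(TinySkipGenerator()(img).shape)  # torch.Size([1, 3, 64, 64])
```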
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.6000000000000005, "end": 9.68, "text": " In the last few years, we have seen a bunch of new AI-based techniques that were specialized"}, {"start": 9.68, "end": 12.6, "text": " in generating new and novel images."}, {"start": 12.6, "end": 17.32, "text": " This is mainly done through learning-based techniques, typically a generative adversarial"}, {"start": 17.32, "end": 23.0, "text": " network, again, in short, which is an architecture where a generator neural network creates new"}, {"start": 23.0, "end": 29.04, "text": " images and passes it to a discriminator network which learns to distinguish real photos from"}, {"start": 29.04, "end": 31.72, "text": " these fake, generated images."}, {"start": 31.72, "end": 38.04, "text": " The two networks learn and improve together and generate better and better images over time."}, {"start": 38.04, "end": 43.32, "text": " What you see here is a set of results created with a technique by the name Psychogun."}, {"start": 43.32, "end": 49.08, "text": " This could even translate daytime into nighttime images, re-imagine a picture of a horse as"}, {"start": 49.08, "end": 54.6, "text": " if it were a zebra and more."}, {"start": 54.6, "end": 60.72, "text": " We can also use it for style transfer, a problem where we have two input images, one for content"}, {"start": 60.72, "end": 66.72, "text": " and one for style, and as you see here, the output would be a nice mixture of the two."}, {"start": 66.72, "end": 72.08, "text": " However, if we use Psychogun for this kind of style transfer, we'll get something like"}, {"start": 72.08, "end": 73.08, "text": " this."}, {"start": 73.08, "end": 78.28, "text": " The goal was to learn the style of a select set of famous illustrators of children's books"}, {"start": 78.28, "end": 81.28, "text": " by providing an input image with their work."}, {"start": 81.28, "end": 84.36, "text": " So, what do you think about the results?"}, {"start": 84.36, "end": 89.6, "text": " While the style is indeed completely different from the source, but the algorithm seems a little"}, {"start": 89.6, "end": 94.12, "text": " too heavy handed and did not leave the content itself intact."}, {"start": 94.12, "end": 97.4, "text": " Let's have a look at another result with a previous technique."}, {"start": 97.4, "end": 99.16, "text": " Maybe this will do better."}, {"start": 99.16, "end": 104.68, "text": " This is Duogan which refers to a paper by the name unsupervised Duolarning for image"}, {"start": 104.68, "end": 106.56, "text": " to image translation."}, {"start": 106.56, "end": 112.84, "text": " This uses two GANs to perform image translation, where one GAN learns to translate, for instance,"}, {"start": 112.84, "end": 119.16, "text": " one day to night, while the other learns the opposite, night to day translation."}, {"start": 119.16, "end": 125.2, "text": " This among other advantages makes things very efficient, but as you see here, in these"}, {"start": 125.2, "end": 130.6, "text": " cases, it preserves the content of the image, but perhaps a little too much because the"}, {"start": 130.6, "end": 135.24, "text": " style itself does not appear too prominently in the output images."}, {"start": 135.24, "end": 142.44, "text": " So, Psychogun is good at transferring style, but a little less so for content and Duogun"}, {"start": 142.44, "end": 148.8, "text": " is good at preserving the 
content, but sometimes adds too little of the style to the image."}, {"start": 148.8, "end": 153.96, "text": " And now, hold on to your papers because this new technique by the name GANILA offers us"}, {"start": 153.96, "end": 155.68, "text": " these results."}, {"start": 155.68, "end": 162.44, "text": " The content is intact, checkmark, and the style goes through really well, checkmark."}, {"start": 162.44, "end": 167.68, "text": " It preserves the content and transfers the style at the same time."}, {"start": 167.68, "end": 168.76, "text": " Excellent!"}, {"start": 168.76, "end": 173.56, "text": " One of the many key reasons as to why this happens is the usage of skip connections, which"}, {"start": 173.56, "end": 179.32, "text": " help preserve the content information as we travel deeper into the neural network."}, {"start": 179.32, "end": 185.04, "text": " So finally, let's put our money where our mouth is and take a bunch of illustrators, marvel"}, {"start": 185.04, "end": 190.48, "text": " at their unique style, and then apply it to photographs and see how the algorithm"}, {"start": 190.48, "end": 194.39999999999998, "text": " stacks up against other previous works."}, {"start": 194.39999999999998, "end": 196.56, "text": " Wow!"}, {"start": 196.56, "end": 199.08, "text": " I love these beautiful results!"}, {"start": 199.08, "end": 204.96, "text": " These comparisons really show how good Duogunila technique is at preserving content."}, {"start": 204.96, "end": 209.96, "text": " And note that these are distinct artistic styles that are really difficult to reproduce"}, {"start": 209.96, "end": 211.72, "text": " even for humans."}, {"start": 211.72, "end": 215.64000000000001, "text": " It is truly amazing that we can perform such a thing algorithmically."}, {"start": 215.64000000000001, "end": 221.84, "text": " Don't forget that the first style transfer paper appeared approximately 3 to 3.5 years"}, {"start": 221.84, "end": 225.84, "text": " ago, and now we have come a long, long way."}, {"start": 225.84, "end": 230.44, "text": " The pace of progress in machine learning research is truly stunning."}, {"start": 230.44, "end": 235.44, "text": " While we are looking at some more amazing results, this time around, only from Gunila, I"}, {"start": 235.44, "end": 241.28, "text": " will note that the authors also made a user study with 48 people who favored this against"}, {"start": 241.28, "end": 246.72, "text": " previous techniques."}, {"start": 246.72, "end": 252.6, "text": " And perhaps leaving the best for last, it can even draw in the style of Hayao Miyazaki."}, {"start": 252.6, "end": 256.92, "text": " I bet there are a bunch of Miyazaki fans watching, so let me know in the comments what you"}, {"start": 256.92, "end": 259.48, "text": " think about these results."}, {"start": 259.48, "end": 261.4, "text": " What a time to be alive!"}, {"start": 261.4, "end": 264.52, "text": " This episode has been supported by weights and biases."}, {"start": 264.52, "end": 270.88, "text": " In this post, they show you how to easily iterate on models by visualizing and comparing experiments"}, {"start": 270.88, "end": 272.2, "text": " in real time."}, {"start": 272.2, "end": 276.64, "text": " Also, weights and biases provide tools to track your experiments in your deep learning"}, {"start": 276.64, "end": 277.64, "text": " projects."}, {"start": 277.64, "end": 282.8, "text": " Each system is designed to save you a ton of time and money, and it is actively used"}, {"start": 282.8, "end": 289.8, "text": 
" in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more."}, {"start": 289.8, "end": 295.0, "text": " And the best part is that if you are an academic or have an open source project, you can use"}, {"start": 295.0, "end": 296.71999999999997, "text": " their tools for free."}, {"start": 296.71999999999997, "end": 299.32, "text": " It really is as good as it gets."}, {"start": 299.32, "end": 304.68, "text": " Make sure to visit them through wnb.com slash papers, or just click the link in the video"}, {"start": 304.68, "end": 308.12, "text": " description, and you can get a free demo today."}, {"start": 308.12, "end": 312.84000000000003, "text": " Our thanks to weights and biases for their long-standing support and for helping us make"}, {"start": 312.84000000000003, "end": 314.24, "text": " better videos for you."}, {"start": 314.24, "end": 343.8, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=HcB3ImpYeQU
Deformable Simulations…Running In Real Time! 🐙
❤️ Check out Weights & Biases here and sign up for a free demo here: https://www.wandb.com/papers The shown blog post is available here: https://www.wandb.com/articles/visualize-lightgbm-performance-in-one-line-of-code 📝 The paper "A Scalable Galerkin Multigrid Method for Real-time Simulation of Deformable Objects" is available here: http://tiantianliu.cn/papers/xian2019multigrid/xian2019multigrid.html ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. With the power of modern computer graphics and machine learning techniques, we are now able to teach virtual humanoids to walk, sit, manipulate objects, and we can even make up new creature types and teach them new tricks, if we are patient enough, that is. But even with all this knowledge, we are not done yet. Are we? Should we just shut down all the research facilities because there is nothing else to do? Well, if you have spent any amount of time watching two-minute papers, you know that the answer is, of course not. There is still so much to do, I don't even know where to start. For instance, let's consider the case of deformable simulations. Not so long ago, we talked about Yuanming Hu's amazing paper, with which we can engage in the favorite pastime of a computer graphics researcher, which is, of course, destroying virtual objects in a spectacular manner. It can also create remarkably accurate jello simulations where we can even choose our physical parameters. Here you see how we can drop in blocks of different densities into the jello, and as a result, they sink in deeper and deeper. Amazing. However, note that this is not for real-time applications and computer games, because the execution time is not measured in frames per second, but in seconds per frame. If we are looking for somewhat coarse results, but in real time, we have covered a paper approximately 300 episodes ago, which performed something that is called a reduced deformable simulation. Leave a comment if you were already a fellow scholar back then. The technique could be trained on a number of different representative cases, which, in computer graphics research, is often referred to as pre-computation, which means that we have to do a ton of work before starting a task, but only once, and then all our subsequent simulations can be sped up. Kind of like a student studying before an exam, so when the exam itself happens, the student, in the ideal case, will know exactly what to do. Imagine trying to learn the whole subject during the exam. Note that the training in this technique is not the same kind of training we are used to seeing with neural networks, and its generalization capabilities were limited, meaning that if we strayed too far from the training examples, the algorithm did not work so reliably. And now, hold on to your papers because this new method runs on your graphics card, and hence can perform these deformable simulations at close to 40 frames per second. And in the following examples in a moment, you will see something even better. A killer advantage of this method is that it is also scalable. This means that the resolution of the object geometry can be changed around. Here, the upper left is a coarse version of the object, while the lower right is the most refined version of it. Of course, the number of frames we can put out per second depends a great deal on the resolution of this geometry, and if you have a look, this looks very close to the one below it, but it is still more than 3 to 6 times faster than real time. Wow! And whenever we are dealing with collisions, lots of amazing details appear. Just look at this. Let's look at a little more formal measurement of the scalability of this method. Note that this is a log-log plot, since the number of tetrahedra used for the geometry and the execution time span many orders of magnitude.
In other words, we can see how it works from the coarsest piece of geometry to the most detailed models we can throw at it. If we look at something like this, we are hoping that the lines are not too steep, which is the case for both the memory and execution timings. So, finally, real-time deformable simulations, here we come. What a time to be alive. This episode has been supported by Weights & Biases. Here, they show you how to make it to the top of Kaggle leaderboards by using their tool to find the best model faster than everyone else. Also, Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that if you are an academic or have an open-source project, you can use their tools for free. It is really as good as it gets. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.32, "end": 8.16, "text": " With the power of modern computer graphics and machine learning techniques,"}, {"start": 8.16, "end": 11.84, "text": " we are now able to teach virtual humanoids to walk,"}, {"start": 11.84, "end": 18.400000000000002, "text": " set, manipulate objects, and we can even make up new creature types and teach them new tricks"}, {"start": 19.12, "end": 25.2, "text": " if we are patient enough that is. But even with all this knowledge, we are not done yet."}, {"start": 25.2, "end": 30.48, "text": " Are we? Should we just shut down all the research facilities because there is nothing else to do?"}, {"start": 31.2, "end": 35.2, "text": " Well, if you have spent any amount of time watching two-minute papers,"}, {"start": 35.2, "end": 40.08, "text": " you know that the answer is, of course not. There is still so much to do,"}, {"start": 40.08, "end": 45.84, "text": " I don't even know where to start. For instance, let's consider the case of deformable simulations."}, {"start": 46.8, "end": 50.8, "text": " Not so long ago, we talked about you and Ming-Hu's amazing paper,"}, {"start": 50.8, "end": 55.839999999999996, "text": " with which we can engage in the favorite pastime of a computer graphics researcher,"}, {"start": 55.839999999999996, "end": 60.559999999999995, "text": " which is, of course, destroying virtual objects in a spectacular manner."}, {"start": 61.04, "end": 66.56, "text": " It can also create remarkably accurate yellow simulations where we can even choose our physical"}, {"start": 66.56, "end": 72.4, "text": " parameters. Here you see how we can drop in blocks of different densities into the yellow,"}, {"start": 72.4, "end": 75.67999999999999, "text": " and as a result, they sink in deeper and deeper."}, {"start": 75.68, "end": 82.48, "text": " Amazing. However, note that this is not for real-time applications and computer games,"}, {"start": 82.48, "end": 88.48, "text": " because the execution time is not measured in frames per second, but in seconds per frame."}, {"start": 88.48, "end": 95.04, "text": " If we are looking for somewhat coarse results, but in real-time, we have covered a paper"}, {"start": 95.04, "end": 101.2, "text": " approximately 300 episodes ago, which performed something that is called a reduced deformable"}, {"start": 101.2, "end": 106.72, "text": " simulation. Leave a comment if you were already a fellow scholar back then. 
The technique could"}, {"start": 106.72, "end": 111.84, "text": " be trained on a number of different representative cases, which, in computer graphics research,"}, {"start": 111.84, "end": 117.12, "text": " is often referred to as pre-computation, which means that we have to do a ton of work before"}, {"start": 117.12, "end": 123.2, "text": " starting a task, but only once, and then all our subsequent simulations can be sped up."}, {"start": 123.84, "end": 129.52, "text": " Kind of like a student studying before an exam, so when the exam itself happens,"}, {"start": 129.52, "end": 133.20000000000002, "text": " the student, in the ideal case, will know exactly what to do."}, {"start": 133.76000000000002, "end": 136.8, "text": " Imagine trying to learn the whole subject during the exam."}, {"start": 137.76000000000002, "end": 142.8, "text": " Note that this training in this technique is not the same kind of training we are used to see"}, {"start": 142.8, "end": 147.28, "text": " with neural networks, and its generalization capabilities were limited,"}, {"start": 147.28, "end": 153.28, "text": " meaning that if we strayed too far from the training examples, the algorithm did not work so reliably."}, {"start": 153.28, "end": 159.44, "text": " And now, hold on to your papers because this new method runs on your graphics card,"}, {"start": 159.44, "end": 165.84, "text": " and hence can perform these deformable simulations at close to 40 frames per second."}, {"start": 165.84, "end": 170.0, "text": " And in the following examples in a moment, you will see something even better."}, {"start": 170.72, "end": 176.88, "text": " A killer advantage of this method is that this is also scalable. This means that a resolution"}, {"start": 176.88, "end": 183.2, "text": " of the object geometry can be changed around. Here, the upper left is a coarse version of the object,"}, {"start": 183.2, "end": 186.0, "text": " where the lower right is the most refined version of it."}, {"start": 186.79999999999998, "end": 192.48, "text": " Of course, the number of frames we can put out per second depends a great deal on the resolution"}, {"start": 192.48, "end": 198.96, "text": " of this geometry, and if you have a look, this looks very close to the one below it, but it is still"}, {"start": 198.96, "end": 206.64, "text": " more than 3 to 6 times faster than real time. Wow! And whenever we are dealing with collisions,"}, {"start": 206.64, "end": 208.88, "text": " lots of amazing details appear."}, {"start": 212.23999999999998, "end": 218.72, "text": " Just look at this. Let's look at a little more formal measurement of the scalability of this method."}, {"start": 218.72, "end": 223.92, "text": " Note that this is a log-log plot since the number of tetrahedra used for the geometry,"}, {"start": 223.92, "end": 230.32, "text": " and the execution time spans many orders of magnitude. In other words, we can see how it works"}, {"start": 230.32, "end": 235.11999999999998, "text": " from the coarsest piece of geometry to the most detailed models we can throw at it."}, {"start": 235.12, "end": 239.52, "text": " If we look at something like this, we are hoping that the lines are not too steep,"}, {"start": 239.52, "end": 243.12, "text": " which is the case for both the memory and execution timings."}, {"start": 243.92000000000002, "end": 250.16, "text": " So, finally, real time deformable simulations, here we come. 
What a time to be alive."}, {"start": 250.8, "end": 256.08, "text": " This episode has been supported by weights and biases. Here, they show you how to make it to the"}, {"start": 256.08, "end": 262.4, "text": " top of Kaggle leaderboards by using their tool to find the best model faster than everyone else."}, {"start": 262.4, "end": 267.59999999999997, "text": " Also, weights and biases provides tools to track your experiments in your deep learning projects."}, {"start": 267.59999999999997, "end": 273.35999999999996, "text": " Their system is designed to save you a ton of time and money, and it is actively used in projects"}, {"start": 273.35999999999996, "end": 279.28, "text": " at prestigious labs such as OpenAI, Toyota Research, GitHub, and more."}, {"start": 279.28, "end": 284.0, "text": " And the best part is that if you are an academic or have an open-source project,"}, {"start": 284.0, "end": 288.47999999999996, "text": " you can use their tools for free. It is really as good as it gets."}, {"start": 288.48, "end": 294.88, "text": " Make sure to visit them through wnb.com slash papers, or just click the link in the video description,"}, {"start": 294.88, "end": 297.36, "text": " and you can get a free demo today."}, {"start": 297.36, "end": 300.8, "text": " Our thanks to weights and biases for their long-standing support,"}, {"start": 300.8, "end": 303.52000000000004, "text": " and for helping us make better videos for you."}, {"start": 303.52, "end": 318.4, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
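A quick way to read "the lines are not too steep" off a log-log scalability plot, as discussed in the transcript above, is to fit the slope in log space: for a cost of roughly C·N^k, the slope of the line is the exponent k. The sketch below uses invented numbers, not measurements from the paper.

```python
import numpy as np

# Hypothetical (number of tetrahedra, milliseconds per frame) measurements.
tets = np.array([1e4, 3e4, 1e5, 3e5, 1e6])
ms_per_frame = np.array([2.1, 5.8, 18.0, 52.0, 170.0])

# On a log-log plot, time ~ C * N^k appears as a straight line with slope k.
slope, intercept = np.polyfit(np.log10(tets), np.log10(ms_per_frame), deg=1)
print(f"estimated scaling exponent k ~ {slope:.2f}")  # ~1 means near-linear scaling
print(f"predicted cost at 2M tets ~ {10**(intercept + slope * np.log10(2e6)):.0f} ms")
```

A shallow slope (an exponent near 1) is exactly the "not too steep" behavior one hopes to see for both memory and execution time.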
Two Minute Papers
https://www.youtube.com/watch?v=yX84nGi-V7E
Transferring Real Honey Into A Simulation 🍯
❤️ Check out Linode here and get $20 free credit on your account: https://www.linode.com/papers 📝 The paper "Video-Guided Real-to-Virtual Parameter Transfer for Viscous Fluids" is available here: http://gamma.cs.unc.edu/ParameterTransfer/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two minute papers with Dr. Károly Zsolnai-Fehér. It's time for some fluid simulations again. Writing fluid simulations is one of the most fun things we can do within computer graphics, because we can create a virtual scene, add the laws of physics for fluid motion, and create photorealistic footage with an absolutely incredible amount of detail and realism. Note that we can do this ourselves, so much so that for this scene, I ran the fluid and light simulation myself here at the Two Minute Papers studio and on consumer hardware. However, despite this amazing looking footage, we are not nearly done yet. There is still so much to explore. For instance, a big challenge these days is trying to simulate fluid-solid interactions. This means that the sand is allowed to have an effect on the fluid, but at the same time, as the fluid sloshes around, it also moves the sand particles within. This is what we refer to as two-way coupling. We also note that there are different kinds of two-way coupling, and only the more advanced ones can correctly simulate how real honey supports the dipper and there is barely any movement. This may be about the only place on the internet where we are super happy that nothing at all is happening. However, many of you astute fellow scholars immediately ask, okay, but what kind of honey are we talking about? We can buy tens if not hundreds of different kinds of honey at the market. If we don't know what kind of honey we are using, how do we know if this simulation is too viscous or not viscous enough? Great question. Just to make sure we don't get lost, viscosity means the amount of resistance against deformation, therefore as we go up, you can witness this kind of resistance increasing. And now, hold on to your papers because this new technique comes from the same authors as the previous one with the honey dipper, and enables us to import real-world honey into our simulation. That sounds like science fiction. Importing real-world materials into a computer simulation, how is that even possible? Well, with this solution, all we need to do is point a consumer smartphone camera at the phenomenon and record it. The proposed technique does all the heavy lifting by first extracting the silhouette of the footage and then creating a simulation that tries to reproduce this behavior. The closer it is, the better. However, at first, of course, we don't know the exact parameters that would result in this; however, now we have an objective we can work towards. The goal is to rerun this simulation with different parameter sets so as to minimize the difference between the simulation and reality. This is not just working by trial and error, but through a technique that we refer to as mathematical optimization. As you see, later, the technique was able to successfully identify the appropriate viscosity parameter. And when evaluating these results, note that this work does not deal with how things look. For instance, whether the honey has the proper color or translucency is not the point here. What we are trying to reproduce is not how it looks, but how it moves. It works on a variety of different fluid types. I have slowed down some of these videos to make sure we can appreciate together how amazingly good these estimations are. And we are not even done yet. If we wish to, we can even set up a similar scene as the real world one with our simulation as a proxy for the real honey or caramel flow.
After that, we can perform anything we want with this virtual piece of fluid, even including putting it into novel scenarios like this scene, which would otherwise be very difficult to control and quite wasteful, or even creating the perfect honey dipper experiment. Look at how perfect the symmetry is there down below. Yum! Normally, in a real-world environment, we cannot pour the honey and apply forces this accurately, but in a simulation, we can do anything we want. And now, we can also import the exact kind of materials from my real-world repertoire. If you can buy it, you can simulate it. What a time to be alive! This episode has been supported by Linode. Linode is the world's largest independent cloud computing provider. Unlike entry-level hosting services, Linode gives you full back-end access to your server, which is a step up to powerful, fast, fully configurable cloud computing. Linode also has one-click apps that streamline your ability to deploy websites, personal VPNs, game servers, and more. If you need something as small as a personal online portfolio, Linode has your back, and if you need to manage tons of clients' websites and reliably serve them to millions of visitors, Linode can do that too. What's more, they offer affordable GPU instances featuring the Quadro RTX 6000, which is tailor-made for AI, scientific computing, and computer graphics projects. If only I had access to a tool like this while I was working on my last few papers. To receive $20 in credit on your new Linode account, visit linode.com slash papers, or click the link in the video description and give it a try today. Our thanks to Linode for supporting the series and helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Karo Jona Ifehir."}, {"start": 4.4, "end": 7.2, "text": " It's time for some fluid simulations again."}, {"start": 7.2, "end": 12.16, "text": " Writing fluid simulations is one of the most fun things we can do within computer graphics"}, {"start": 12.16, "end": 17.44, "text": " because we can create a virtual scene, add the laws of physics for fluid motion,"}, {"start": 17.44, "end": 23.84, "text": " and create photorealistic footage with an absolutely incredible amount of detail and realism."}, {"start": 23.84, "end": 29.68, "text": " Note that we can do this ourselves so much so that for this scene I run the fluid and light"}, {"start": 29.68, "end": 35.12, "text": " simulation myself here at the two minute paper studio and on consumer hardware."}, {"start": 35.12, "end": 39.76, "text": " However, despite this amazing looking footage, we are not nearly done yet."}, {"start": 39.76, "end": 42.16, "text": " There is still so much to explore."}, {"start": 42.16, "end": 47.120000000000005, "text": " For instance, a big challenge these days is trying to simulate fluid solid interactions."}, {"start": 47.120000000000005, "end": 52.16, "text": " This means that the sand is allowed to have an effect on the fluid, but at the same time,"}, {"start": 52.16, "end": 56.8, "text": " as the fluid sloshes around, it also moves the sand particles within."}, {"start": 56.8, "end": 61.839999999999996, "text": " This is what we refer to as two-way coupling. We also note that there are different kinds of"}, {"start": 61.839999999999996, "end": 67.52, "text": " two-way coupling and only the more advanced ones can correctly simulate how real honey supports"}, {"start": 67.52, "end": 73.67999999999999, "text": " the deeper and there is barely any movement. This may be about the only place on the internet"}, {"start": 73.67999999999999, "end": 76.56, "text": " where we are super happy that nothing at all is happening."}, {"start": 77.28, "end": 83.6, "text": " However, many of you astute fellow scholars immediately ask, okay, but what kind of honey"}, {"start": 83.6, "end": 89.36, "text": " are we talking about? We can buy tens if not hundreds of different kinds of honey at the market."}, {"start": 90.0, "end": 95.11999999999999, "text": " If we don't know what kind of honey we are using, how do we know if this simulation is too"}, {"start": 95.11999999999999, "end": 100.88, "text": " viscous or not viscous enough? Great question. Just to make sure we don't get lost,"}, {"start": 100.88, "end": 106.63999999999999, "text": " viscosity means the amount of resistance against deformation, therefore as we go up,"}, {"start": 106.63999999999999, "end": 109.28, "text": " you can witness this kind of resistance increasing."}, {"start": 109.28, "end": 115.44, "text": " And now, hold on to your papers because this new technique comes from the same authors as the"}, {"start": 115.44, "end": 121.36, "text": " previous one with the honey deeper and enables us to import real-world honey into our simulation."}, {"start": 122.16, "end": 127.76, "text": " That sounds like science fiction. Importing real-world materials into a computer simulation,"}, {"start": 128.4, "end": 135.52, "text": " how is that even possible? Well, with this solution, all we need to do is point a consumer smartphone"}, {"start": 135.52, "end": 142.08, "text": " camera at the phenomenon and record it. 
The proposed technique does all the heavy lifting by first"}, {"start": 142.08, "end": 148.16000000000003, "text": " extracting the silhouette of the footage and then creating a simulation that tries to reproduce"}, {"start": 148.16000000000003, "end": 155.12, "text": " this behavior. The closer it is, the better. However, at first, of course, we don't know the exact"}, {"start": 155.12, "end": 160.56, "text": " parameters that would result in this, however, now we have an objective we can work towards."}, {"start": 160.56, "end": 166.48, "text": " The goal is to rerun this simulation with different parameters sets in a way to minimize the"}, {"start": 166.48, "end": 172.96, "text": " difference between the simulation and reality. This is not just working by trial and error,"}, {"start": 172.96, "end": 179.12, "text": " but through a technique that we refer to as mathematical optimization. As you see, later,"}, {"start": 179.12, "end": 184.16, "text": " the technique was able to successfully identify the appropriate viscosity parameter."}, {"start": 184.16, "end": 190.24, "text": " And when evaluating these results, note that this work does not deal with how things look."}, {"start": 190.24, "end": 196.32, "text": " For instance, whether the honey has the proper color or translucency is not the point here."}, {"start": 196.32, "end": 202.88, "text": " What we are trying to reproduce is not how it looks, but how it moves. It works on a variety of"}, {"start": 202.88, "end": 209.28, "text": " different fluid types. I have slowed down some of these videos to make sure we can appreciate together"}, {"start": 209.28, "end": 215.84, "text": " how amazingly good these estimations are. And we are not even done yet. If we wish to,"}, {"start": 215.84, "end": 221.28, "text": " we can even set up a similar scene as the real world one with our simulation as a proxy"}, {"start": 221.28, "end": 227.28, "text": " for the real honey or caramel flow. After that, we can perform anything we want with this virtual"}, {"start": 227.28, "end": 233.68, "text": " piece of fluid, even including putting it into novel scenarios like this scene, which would otherwise"}, {"start": 233.68, "end": 240.88, "text": " be very difficult to control and quite wasteful, or even creating the perfect honey-depri-experiment."}, {"start": 241.84, "end": 244.4, "text": " Look at how perfect the symmetry is there down below."}, {"start": 245.92000000000002, "end": 251.84, "text": " Yum! Normally, in a real world environment, we cannot pour the honey and apply forces"}, {"start": 251.84, "end": 258.48, "text": " this accurately, but in a simulation, we can do anything we want. And now, we can also import"}, {"start": 258.48, "end": 265.04, "text": " the exact kind of materials for my real world repertoire. If you can buy it, you can simulate it."}, {"start": 265.04, "end": 271.12, "text": " What a time to be alive! This episode has been supported by Linode. Linode is the world's largest"}, {"start": 271.12, "end": 276.56, "text": " independent cloud computing provider. Unlike entry-level hosting services, Linode gives you"}, {"start": 276.56, "end": 282.96000000000004, "text": " full back-end access to your server, which is a step-up to powerful, fast, fully configurable cloud"}, {"start": 282.96, "end": 288.56, "text": " computing. Linode also has one click apps that streamline your ability to deploy websites,"}, {"start": 288.56, "end": 296.0, "text": " personal VPNs, game servers, and more. 
If you need something as small as a personal online portfolio,"}, {"start": 296.0, "end": 301.59999999999997, "text": " Linode has your back, and if you need to manage tons of clients' websites and reliably serve them"}, {"start": 301.59999999999997, "end": 308.4, "text": " to millions of visitors, Linode can do that too. What's more, they offer affordable GPU instances"}, {"start": 308.4, "end": 315.2, "text": " featuring the Quadro RTX 6000, which is tailor-made for AI, scientific computing, and computer"}, {"start": 315.2, "end": 320.4, "text": " graphics projects. If only I had access to a tool like this while I was working on my last few"}, {"start": 320.4, "end": 327.28, "text": " papers. To receive $20 in credit on your new Linode account, visit linode.com slash papers,"}, {"start": 327.28, "end": 332.23999999999995, "text": " or click the link in the video description and give it a try today. Our thanks to Linode for"}, {"start": 332.23999999999995, "end": 337.2, "text": " supporting the series and helping us make better videos for you. Thanks for watching and for your"}, {"start": 337.2, "end": 347.2, "text": " generous support, and I'll see you next time."}]
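The parameter-transfer loop described in the transcript above (simulate, compare silhouettes against the recorded video, adjust viscosity, repeat) can be sketched as a one-dimensional optimization. Everything below is a toy stand-in: the "simulator" is a made-up analytic function and the target viscosity is invented; the paper uses a full viscous fluid solver and silhouettes extracted from real smartphone footage.

```python
import numpy as np
from scipy.optimize import minimize_scalar

TRUE_VISCOSITY = 4.2  # invented ground truth, standing in for the real honey

def simulate_silhouette(viscosity: float) -> np.ndarray:
    """Toy simulator: the spread of the fluid front over time shrinks with viscosity."""
    t = np.linspace(0.0, 1.0, 50)
    return np.sqrt(t) / (1.0 + viscosity)

observed = simulate_silhouette(TRUE_VISCOSITY)  # stands in for silhouettes from the video

def objective(viscosity: float) -> float:
    # Mismatch between simulated and observed silhouettes (smaller is better).
    return float(np.sum((simulate_silhouette(viscosity) - observed) ** 2))

result = minimize_scalar(objective, bounds=(0.1, 100.0), method="bounded")
print(f"recovered viscosity ~ {result.x:.2f}")  # should land near 4.2
```

The real method minimizes a similar simulation-versus-footage mismatch, just with a far more expensive simulator in the inner loop.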
Two Minute Papers
https://www.youtube.com/watch?v=mjl4NEMG0JE
Can We Detect Neural Image Generators?
❤️ Check out Weights & Biases here and sign up for a free demo here: https://www.wandb.com/papers Their instrumentation of this paper: https://app.wandb.ai/lavanyashukla/cnndetection/reports/Detecting-CNN-Generated-Images--Vmlldzo2MTU1Mw 📝 The paper "CNN-generated images are surprisingly easy to spot...for now" is available here: https://peterwang512.github.io/CNNDetection/ Our Discord server is now available here and you are all invited! https://discordapp.com/invite/hbcTJu2 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #DeepFake #DeepFakes
Dear Fellow Scholars, this is 2 Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we have an abundance of neural network-based image generation techniques. Every image that you see here and throughout this video is generated by one of these learning-based methods. This can offer high-fidelity synthesis, and not only that, but we can even exert artistic control over the outputs. We can truly do so much with this. And if you're wondering, there is a reason why we will be talking about this exact set of techniques, and you will see that in a moment. So the first one is a very capable technique by the name CycleGAN. This was great at image translation or, in other words, transforming apples into oranges, zebras into horses and more. It was called CycleGAN because it introduced a cycle consistency loss function. This means that if we convert a summer image to a winter image and then back to a summer image, we should get the same input image back. If our learning system obeys this principle, the output quality of the translation is going to be significantly better. Later, a technique by the name BigGAN appeared, which was able to create reasonably high-quality images, and not only that, but it also gave us a little artistic control over the outputs. After that, StyleGAN and even its second version appeared which, among many other crazy good features, opened up the possibility to lock in several aspects of these images. For instance, age, pose, some facial features and more. And then we could mix them with other images to our liking while retaining these locked-in aspects. And of course, deepfake creation provides fertile ground for research works, so much so that at this point, it seems to be a subfield of its own where the rate of progress is just stunning. Now that we can generate arbitrarily many beautiful images with these learning algorithms, they will inevitably appear in many corners of the internet, so an important new question arises: can we detect if an image was made by these methods? This new paper argues that the answer is a resounding yes. You see a bunch of synthetic images above and real images below here, and if you look carefully at the labels, you'll see many names that ring a bell to our scholarly minds. CycleGAN, BigGAN, StarGAN, nice. And now you know that this is exactly why we briefly went through what these techniques do at the start of the video. So all of these can be detected by this new method. And now hold on to your papers, because I kind of expected that, but what I didn't expect is that this detector was trained on only one of these techniques, and leaning on that knowledge, it was able to catch all the others. Now that's incredible. This means that there are foundational elements that bind together all of these techniques. Our seasoned Fellow Scholars know that this similarity is none other than the fact that they are all built on convolutional neural networks. They are vastly different, but they use very similar building blocks. Imagine the convolutional layers as Lego pieces and think of the techniques themselves as the objects that we build using them. We can build anything, but what binds these all together is that they are all but a collection of Lego pieces. So this detector was only trained on real images and synthetic ones created by the ProGAN technique, and you see with the blue bars that the detection ratio is quite close to perfect for a number of techniques, save for these two. The AP label means average precision.
If you look at the paper in the description, you will get a lot more insights as to how robust it is against compression artifacts, a little frequency analysis of the different synthesis techniques and more. Let's send a huge thank you to the authors of the paper, who also provide the source code and training data for this technique. For now, we can all breathe a sigh of relief that there are proper detection tools that we can train ourselves at home. In fact, you will see such an example in a second. What a time to be alive. Also, good news. We now have an unofficial Discord server where all of you Fellow Scholars are welcome to discuss ideas and learn together in a kind and respectful environment. Look, some connections and discussions are already being made. Thank you so much to our volunteering Fellow Scholars for making this happen. The link is available in the video description. It is completely free. And if you have joined, make sure to leave a short introduction. Meanwhile, what you see here is an instrumentation of this exact paper we have talked about, which was made by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that if you are an academic or have an open source project, you can use their tools for free. It is really as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
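To make the cycle consistency idea in the transcript above a bit more concrete, here is a minimal PyTorch-style sketch, assuming two generator networks are given; the names are placeholders for illustration, and this is not the CycleGAN authors' code.

```python
import torch.nn.functional as F

def cycle_consistency_loss(summer_images, G_summer_to_winter, G_winter_to_summer, weight=10.0):
    # Translate summer -> winter, then winter -> back to summer...
    fake_winter = G_summer_to_winter(summer_images)
    reconstructed_summer = G_winter_to_summer(fake_winter)
    # ...and penalize any difference from the original input images.
    return weight * F.l1_loss(reconstructed_summer, summer_images)
```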
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Dr. Karo Zona Ifehir."}, {"start": 4.64, "end": 9.24, "text": " Today, we have an abundance of neural network-based image generation techniques."}, {"start": 9.24, "end": 14.08, "text": " Every image that you see here and throughout this video is generated by one of these learning"}, {"start": 14.08, "end": 15.52, "text": " based methods."}, {"start": 15.52, "end": 21.240000000000002, "text": " This can offer high fidelity synthesis and not only that, but we can even exert artistic"}, {"start": 21.240000000000002, "end": 23.32, "text": " control over the outputs."}, {"start": 23.32, "end": 25.76, "text": " We can truly do so much with this."}, {"start": 25.76, "end": 30.080000000000002, "text": " And if you're wondering, there is a reason why we will be talking about an exact set of"}, {"start": 30.080000000000002, "end": 33.56, "text": " techniques and you will see that in a moment."}, {"start": 33.56, "end": 38.36, "text": " So the first one is a very capable technique by the name CycleGAN."}, {"start": 38.36, "end": 44.52, "text": " This was great at image translation or in other words, transforming apples into oranges,"}, {"start": 44.52, "end": 47.6, "text": " zebras into horses and more."}, {"start": 47.6, "end": 53.24, "text": " It was called CycleGAN because it introduced a Cycle consistency loss function."}, {"start": 53.24, "end": 58.84, "text": " This means that if we convert a summer image to a winter image and then back to a summer"}, {"start": 58.84, "end": 62.36, "text": " image, we should get the same input image back."}, {"start": 62.36, "end": 67.64, "text": " If our learning system obeys to this principle, the output quality of the translation is going"}, {"start": 67.64, "end": 69.84, "text": " to be significantly better."}, {"start": 69.84, "end": 75.44, "text": " Later, a technique by the name BigGAN appeared which was able to create reasonably high quality"}, {"start": 75.44, "end": 82.64, "text": " images and not only that, but it also gave us a little artistic control over the outputs."}, {"start": 82.64, "end": 88.48, "text": " After that, CycleGAN and even its second version appeared which, among many other crazy"}, {"start": 88.48, "end": 94.16, "text": " good features, opened up the possibility to lock in several aspects of these images."}, {"start": 94.16, "end": 99.03999999999999, "text": " For instance, age, pose, some facial features and more."}, {"start": 99.03999999999999, "end": 104.44, "text": " And then we could mix them with other images to our liking while retaining these locked"}, {"start": 104.44, "end": 105.76, "text": " in aspects."}, {"start": 105.76, "end": 110.84, "text": " And of course, deep-fake creation provides fertile grounds for research works so much so that"}, {"start": 110.84, "end": 116.16, "text": " at this point, it seems to be a subfield of its own where the rate of progress is just"}, {"start": 116.16, "end": 117.16, "text": " stunning."}, {"start": 117.16, "end": 122.24000000000001, "text": " Now that we can generate arbitrarily many beautiful images with these learning algorithms,"}, {"start": 122.24000000000001, "end": 127.84, "text": " they will inevitably appear in many corners of the internet, so an important new question"}, {"start": 127.84, "end": 133.04, "text": " arises, can we detect if an image was made by these methods?"}, {"start": 133.04, "end": 136.8, "text": " This new paper argues that the answer is a resounding yes."}, 
{"start": 136.8, "end": 141.84, "text": " You see a bunch of synthetic images above and real images below here, and if you look"}, {"start": 141.84, "end": 147.52, "text": " carefully for the labels, you'll see many names that ring a bell to our scholarly minds."}, {"start": 147.52, "end": 152.20000000000002, "text": " CycleGAN, big gain, star gain, nice."}, {"start": 152.20000000000002, "end": 156.60000000000002, "text": " And now you know that this is exactly why we briefly went through what these techniques"}, {"start": 156.60000000000002, "end": 158.76000000000002, "text": " do at the start of the video."}, {"start": 158.76000000000002, "end": 162.60000000000002, "text": " So all of these can be detected by this new method."}, {"start": 162.6, "end": 168.72, "text": " And now hold on to your papers because I kind of expected that, but what I didn't expect"}, {"start": 168.72, "end": 173.72, "text": " is that this detector was trained on only one of these techniques and leaning on that"}, {"start": 173.72, "end": 177.64, "text": " knowledge it was able to catch all the others."}, {"start": 177.64, "end": 179.44, "text": " Now that's incredible."}, {"start": 179.44, "end": 184.64, "text": " This means that there are foundational elements that bind together all of these techniques."}, {"start": 184.64, "end": 189.68, "text": " Our season fellow scholars know that this similarity is none other than the fact that"}, {"start": 189.68, "end": 193.04000000000002, "text": " they are all built on convolutional neural networks."}, {"start": 193.04000000000002, "end": 197.64000000000001, "text": " They are vastly different, but they use very similar building blocks."}, {"start": 197.64000000000001, "end": 202.44, "text": " Imagine the convolutional layers as Lego pieces and think of the techniques themselves to"}, {"start": 202.44, "end": 205.28, "text": " be the objects that we build using them."}, {"start": 205.28, "end": 210.76000000000002, "text": " We can build anything, but what binds these all together is that they are all but a collection"}, {"start": 210.76000000000002, "end": 212.60000000000002, "text": " of Lego pieces."}, {"start": 212.60000000000002, "end": 218.24, "text": " So this detector was only trained on real images and synthetic ones created by the progain"}, {"start": 218.24, "end": 224.08, "text": " technique and you see with the blue bars that the detection ratio is quite close to perfect"}, {"start": 224.08, "end": 227.64000000000001, "text": " for a number of techniques saved for these two."}, {"start": 227.64000000000001, "end": 230.60000000000002, "text": " The AP label means average precision."}, {"start": 230.60000000000002, "end": 234.64000000000001, "text": " If you look at the paper in the description, you will get a lot more insights as to how"}, {"start": 234.64000000000001, "end": 239.64000000000001, "text": " robust it is against compression artifacts, a little frequency analysis of the different"}, {"start": 239.64000000000001, "end": 242.24, "text": " synthesis techniques and more."}, {"start": 242.24, "end": 247.08, "text": " Let's send a huge thank you to the authors of the paper who also provide a source code"}, {"start": 247.08, "end": 249.4, "text": " and training data for this technique."}, {"start": 249.4, "end": 254.08, "text": " For now, we can all breathe a sigh of relief that there are proper detection tools that"}, {"start": 254.08, "end": 256.32, "text": " we can train ourselves at home."}, {"start": 256.32, "end": 259.76, "text": " In fact, you 
will see such an example in a second."}, {"start": 259.76, "end": 261.52000000000004, "text": " What a time to be alive."}, {"start": 261.52000000000004, "end": 263.12, "text": " Also, good news."}, {"start": 263.12, "end": 267.52000000000004, "text": " We now have an unofficial Discord server where all of you fellow scholars are welcome"}, {"start": 267.52000000000004, "end": 272.56, "text": " to discuss ideas and learn together in a kind and respectful environment."}, {"start": 272.56, "end": 276.72, "text": " Look, some connections and discussions are already being made."}, {"start": 276.72, "end": 280.6, "text": " Thank you so much for our volunteering fellow scholars for making this happen."}, {"start": 280.6, "end": 282.8, "text": " The link is available in the video description."}, {"start": 282.8, "end": 284.44000000000005, "text": " It is completely free."}, {"start": 284.44000000000005, "end": 287.96000000000004, "text": " And if you have joined, make sure to leave a short introduction."}, {"start": 287.96000000000004, "end": 293.6, "text": " Meanwhile, what you see here is an instrumentation of this exact paper we have talked about,"}, {"start": 293.6, "end": 296.20000000000005, "text": " which was made by Wates and Biasis."}, {"start": 296.20000000000005, "end": 300.92, "text": " Wates and Biasis provides tools to track your experiments in your deep learning projects."}, {"start": 300.92, "end": 305.6, "text": " Their system is designed to save you a ton of time and money and it is actively used in"}, {"start": 305.6, "end": 312.36, "text": " projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more."}, {"start": 312.36, "end": 317.20000000000005, "text": " And the best part is that if you are an academic or have an open source project, you can use"}, {"start": 317.20000000000005, "end": 318.72, "text": " their tools for free."}, {"start": 318.72, "end": 321.24, "text": " It is really as good as it gets."}, {"start": 321.24, "end": 326.64000000000004, "text": " Make sure to visit them through wnb.com slash papers or just click the link in the video"}, {"start": 326.64000000000004, "end": 330.24, "text": " description and you can get a free demo today."}, {"start": 330.24, "end": 335.28000000000003, "text": " Our thanks to Wates and Biasis for their long-standing support and for helping us make better"}, {"start": 335.28, "end": 336.28, "text": " videos for you."}, {"start": 336.28, "end": 365.28, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=548sCh0mMRc
This Neural Network Creates 3D Objects From Your Photos
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Learning to Predict 3D Objects with an Interpolation-based Differentiable Renderer" is available here: https://nv-tlabs.github.io/DIB-R/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-1232435/ Thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. In computer graphics research, we spend most of our time dealing with images. An image is a bunch of pixels put onto a 2D plane, which is a tiny window into reality, but reality is inherently 3D. This is easy to understand for us, because if we look at a flat image, we see the geometric structures that it depicts. If we look at this image, we know that this is not a sticker, but a 3-dimensional fluid domain. If I were to freeze this image and ask a human to imagine rotating around this fluid domain, that human would do a pretty good job at that. However, for a computer algorithm, it would be extremely difficult to extract the 3D structure out of this image. So can we use these shiny, new neural network-based learning algorithms to accomplish something like this? Well, have a look at this new technique that takes a 2D image as an input and tries to guess three things. The cool thing is that the geometry problem we talked about is just the first one. Beyond that, two, it also guesses what the lighting configuration is that leads to an appearance like this, and three, it also produces the texture map for the object as well. This would already be great, but wait, there's more. If we plug all this into a rendering program, we can also specify a camera position, and this position can be different from the one that was used to take this input image. So what does that mean exactly? Well, it means that maybe it can not only reconstruct the geometry, light, and texture of the object, but even put this all together and make a photo of it from a novel viewpoint. Wow! Let's have a look at an example. There's a lot going on in this image, so let me try to explain how to read it. This image is the input photo, and the white silhouette image is called a mask, which can either be given with the image or be approximated by already existing methods. This is the image reconstructed by this technique, and then this is a previous method from 2018 by the name Category Specific Mesh Reconstruction, CMR in short. And now, hold on to your papers, because in the second row, you see this technique creating images of this bird from different novel viewpoints. How cool is that? Absolutely amazing. Since we can render this bird from any viewpoint, we can even create a turntable video of it. And all this from just one input photo. Let's have a look at another example. Here you see how it puts together the final car rendering in the first column from the individual elements like geometry, texture, and lighting. The other comparisons in the paper reveal that this technique is indeed a huge step up from previous works. Now, all this sounds great, but what is all this used for? What are some example applications of this 3D-object-from-2D-image thing? Well, techniques like this can be a great deal of help in enhancing the depth perception capabilities of robots, and of course, whenever we would like to build a virtual world, creating a 3D version of something we only have a picture of can get extremely laborious. This could help a great deal with that too. For this application, we could quickly get a starting point with some texture information and get an artist to fill in the fine details. This might get addressed in a follow-up paper. And if you are worried about the slight discoloration around the beak area of this bird, do not despair. As we always say, two more papers down the line and this will likely be improved significantly. What a time to be alive.
This episode has been supported by Lambda. If you're a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos, and I'm happy to tell you that they are offering GPU Cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold on to your papers, because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to lambdalabs.com slash papers and sign up for one of their amazing GPU instances today. Thanks to Lambda for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
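As a conceptual sketch of the guess-then-re-render loop described above, the following Python snippet shows one training step, assuming a prediction network and a differentiable renderer are already available; `predictor` and `renderer` are hypothetical placeholders for illustration and are not the DIB-R API.

```python
import torch
import torch.nn.functional as F

def training_step(predictor, renderer, photo, mask, camera, optimizer):
    """One optimization step: guess geometry, texture, and lighting, re-render, compare to the photo."""
    mesh, texture, lighting = predictor(photo)                       # the three guesses from the transcript
    rendered_rgb, rendered_mask = renderer(mesh, texture, lighting, camera)
    # Compare the re-rendered image and silhouette against the input photo and its mask.
    loss = (F.l1_loss(rendered_rgb * mask, photo * mask)
            + F.binary_cross_entropy(rendered_mask.clamp(0, 1), mask))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```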
[{"start": 0.0, "end": 4.88, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Karoizzoa Naifahir."}, {"start": 4.88, "end": 9.84, "text": " In computer graphics research, we spend most of our time dealing with images."}, {"start": 9.84, "end": 16.12, "text": " An image is a bunch of pixels put onto a 2D plane, which is a tiny window into reality,"}, {"start": 16.12, "end": 19.48, "text": " but reality is inherently 3D."}, {"start": 19.48, "end": 24.72, "text": " This is easy to understand for us because if we look at a flat image, we see the geometric"}, {"start": 24.72, "end": 26.76, "text": " structures that hit the pics."}, {"start": 26.76, "end": 31.360000000000003, "text": " If we look at this image, we know that this is not a sticker, but a 3-dimensional fluid"}, {"start": 31.360000000000003, "end": 32.36, "text": " domain."}, {"start": 32.36, "end": 38.24, "text": " If I would freeze an image and ask a human to imagine rotating around this fluid domain,"}, {"start": 38.24, "end": 41.08, "text": " that human would do a pretty good job at that."}, {"start": 41.08, "end": 46.36, "text": " However, for a computer algorithm, it would be extremely difficult to extract the 3D structure"}, {"start": 46.36, "end": 48.160000000000004, "text": " out from this image."}, {"start": 48.160000000000004, "end": 53.760000000000005, "text": " So can we use these shiny, new neural network-based learning algorithms to accomplish something"}, {"start": 53.760000000000005, "end": 54.760000000000005, "text": " like this?"}, {"start": 54.76, "end": 60.4, "text": " Well, have a look at this new technique that takes a 2D image as an input and tries to"}, {"start": 60.4, "end": 62.28, "text": " guess 3 things."}, {"start": 62.28, "end": 67.12, "text": " The cool thing is that the geometry problem we talked about is just the first one."}, {"start": 67.12, "end": 73.03999999999999, "text": " Beyond that, 2, it also guesses what the lighting configuration is that leads to an appearance"}, {"start": 73.03999999999999, "end": 78.8, "text": " like this, and 3, it also produces the texture map for an object as well."}, {"start": 78.8, "end": 82.64, "text": " This would already be great, but wait, there's more."}, {"start": 82.64, "end": 88.5, "text": " If we plug all this into a rendering program, we can also specify a camera position and"}, {"start": 88.5, "end": 94.4, "text": " this position can be different from the one that was used to take this input image."}, {"start": 94.4, "end": 96.6, "text": " So what does that mean exactly?"}, {"start": 96.6, "end": 102.68, "text": " Well, it means that maybe it can not only reconstruct the geometry, light, and texture"}, {"start": 102.68, "end": 109.64, "text": " of the object, but even put this all together and make a photo of it from a novel viewpoint."}, {"start": 109.64, "end": 110.64, "text": " Wow!"}, {"start": 110.64, "end": 113.12, "text": " Let's have a look at an example."}, {"start": 113.12, "end": 117.36, "text": " There's a lot going on in this image, so let me try to explain how to read it."}, {"start": 117.36, "end": 122.24, "text": " This image is the input photo, and the white silhouette image is called a mask which can"}, {"start": 122.24, "end": 127.96000000000001, "text": " either be given with the image or be approximated by already existing methods."}, {"start": 127.96000000000001, "end": 134.44, "text": " This is the reconstructed image by this technique, and then this is a previous method from 2018"}, {"start": 134.44, "end": 
139.8, "text": " by the name Category Specific Mesh Reconstruction, CMR in short."}, {"start": 139.8, "end": 144.68, "text": " And now, hold on to your papers because in the second row, you see this technique creating"}, {"start": 144.68, "end": 149.24, "text": " images of this bird from different novel viewpoints."}, {"start": 149.24, "end": 151.32000000000002, "text": " How cool is that?"}, {"start": 151.32000000000002, "end": 153.0, "text": " Absolutely amazing."}, {"start": 153.0, "end": 159.0, "text": " Since we can render this bird from any viewpoint, we can even create a turntable video of it."}, {"start": 159.0, "end": 164.48000000000002, "text": " And all this from just one input photo."}, {"start": 164.48000000000002, "end": 166.48000000000002, "text": " Let's have a look at another example."}, {"start": 166.48, "end": 171.32, "text": " Here you see how it puts together the final car rendering in the first column from the"}, {"start": 171.32, "end": 177.39999999999998, "text": " individual elements like geometry, texture, and lighting."}, {"start": 177.39999999999998, "end": 182.32, "text": " The other comparisons in the paper reveal that this technique is indeed a huge step up"}, {"start": 182.32, "end": 184.0, "text": " from previous works."}, {"start": 184.0, "end": 188.0, "text": " Now all this sounds great, but what is all this used for?"}, {"start": 188.0, "end": 193.0, "text": " What are some example applications of this 3D object from 2D image thing?"}, {"start": 193.0, "end": 198.36, "text": " Well, techniques like this can be a great deal of help in enhancing the depth perception"}, {"start": 198.36, "end": 203.52, "text": " capabilities of robots, and of course, whenever we would like to build a virtual world,"}, {"start": 203.52, "end": 209.56, "text": " creating a 3D version of something we only have a picture of can get extremely laborious."}, {"start": 209.56, "end": 212.08, "text": " This could help a great deal with that too."}, {"start": 212.08, "end": 216.8, "text": " For this application, we could quickly get a starting point with some text re-information"}, {"start": 216.8, "end": 219.96, "text": " and get an artist to fill in the fine details."}, {"start": 219.96, "end": 222.68, "text": " This might get addressed in a follow-up paper."}, {"start": 222.68, "end": 228.84, "text": " And if you are worried about the slide discoloration around the big area of this bird, do not despair."}, {"start": 228.84, "end": 234.76000000000002, "text": " As we always say, two more papers down the line and this will likely be improved significantly."}, {"start": 234.76000000000002, "end": 236.52, "text": " What a time to be alive."}, {"start": 236.52, "end": 238.92000000000002, "text": " This episode has been supported by Lambda."}, {"start": 238.92000000000002, "end": 244.16, "text": " If you're a researcher or a startup looking for cheap GPU compute to run these algorithms,"}, {"start": 244.16, "end": 246.36, "text": " check out Lambda GPU Cloud."}, {"start": 246.36, "end": 251.24, "text": " I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you"}, {"start": 251.24, "end": 254.24, "text": " that they are offering GPU Cloud services as well."}, {"start": 254.24, "end": 261.24, "text": " The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19."}, {"start": 261.24, "end": 266.52, "text": " Lambda's web-based IDE lets you easily access your instance right in your browser."}, {"start": 266.52, "end": 273.24, "text": " And finally, 
hold on to your papers because the Lambda GPU Cloud costs less than half of AWS"}, {"start": 273.24, "end": 274.24, "text": " and Azure."}, {"start": 274.24, "end": 279.68, "text": " Make sure to go to lambdaleps.com, slash papers, and sign up for one of their amazing GPU"}, {"start": 279.68, "end": 280.68, "text": " instances today."}, {"start": 280.68, "end": 284.36, "text": " Thanks to Lambda for helping us make better videos for you."}, {"start": 284.36, "end": 313.92, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=TbWQ4lMnLNw
The Story of Light! ☀️
📝 The paper "Unifying points, beams, and paths in volumetric light transport simulation" and its implementation are available here: - https://cs.dartmouth.edu/~wjarosz/publications/krivanek14upbp.html - http://www.smallupbp.com/ Eric Veach's thesis with Multiple Importance Sampling is available here: https://graphics.stanford.edu/papers/veach_thesis/ My Light Transport course at the TU Wien is available here: https://www.youtube.com/playlist?list=PLujxSBD-JXgnGmsn7gEyN28P1DnRZG7qi We are hiring! I recommend the topic "Lighting Simulation For Architectural Design": http://gcd.tuwien.ac.at/?page_id=2404 My educational light transport program and 1D MIS implementation is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/smallpaint/ Wojciech Jarosz's Beams paper is available here: https://cs.dartmouth.edu/~wjarosz/publications/jarosz11comprehensive.html Errata: Wojciech Jarosz is at the Dartmouth College, or simply Dartmouth (not the Dartmouth University). He also started at Disney Research in 2009, a little earlier than I noted. Apologies! ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is 2 Minute Papers with Dr. Károly Zsolnai-Fehér. Whenever we look at these amazing research papers on physical simulations, it is always a joy seeing people discussing them in the comments section. However, one thing that caught my attention is that some people comment about how things look and not on how things move in these papers. Which is fair enough, and to this end, I will devote this episode to talking about a few amazing techniques in light transport simulations. But first things first: when talking about physical simulations, we are talking about a technique that computes how things move. Then, we typically run a light simulation program that computes how things look. The two are completely independent, which means that it is possible that the physical behavior of bread breaking here is correct, but the bread itself does not look perfectly realistic. The second part depends on the quality of the light simulation and the materials used there. We can create such an image by simulating the path of millions and millions of light rays. And initially, this image will look noisy, and as we add more and more rays, this image will slowly clean up over time. If we don't have a well-optimized program, this can take from hours to days to compute. We can speed up this process by carefully choosing where to shoot these rays, and this is a technique that is called importance sampling. But then, around 1993, an amazing paper appeared by the name Bi-directional Path Tracing, which proposed that we don't just start building light paths from one direction, but two instead. One from the camera, and one from the light source, and then we connect them. This significantly improved the efficiency of these light simulations; however, it opened up a new can of worms. There are many different ways of connecting these paths, which leads to mathematical difficulties. For instance, we have to specify the probability of a light path forming, but what do we do if there are multiple ways of producing this light path? There will be multiple probabilities. What do we do with all this stuff? To address this, Eric Veach described a magical algorithm in his thesis, and thus, multiple importance sampling was born. I can say without exaggeration that this is one of the most powerful techniques in all of photorealistic rendering research. What multiple importance sampling, or from now on, MIS in short, does is combine these multiple sampling techniques in a way that accentuates the strengths of each of them. For instance, you can see the image created by one sampling technique here, and the image from a different one here. Both of them are quite noisy, but if we combine them with MIS, we get this instead in the same amount of time. A much smoother, less noisy image. In many cases, this can truly bring down the computation times from several hours to several minutes. Absolute witchcraft. Later, even more advanced techniques appeared to accelerate the speed of these light simulation programs. For instance, it is now not only possible to compute light transport between points in space, but between a point and a beam instead. You see the evolution of an image using this photon beam-based technique. This way, we can get rid of the point-based noise and get a much, much more appealing rendering process. The lead author of this beam paper is Wojciech Jarosz, who, three years later, ended up being the head of the rendering group at the amazing Disney Research lab.
Around that time, he also hired me to work with him at Disney on a project I can't talk about, which was an incredible and life-changing experience, and I will be forever grateful for his kindness. By the way, he is now a professor at Dartmouth College and just keeps pumping out one killer paper after another. So, as you might have guessed, if it is possible to compute light transport between two points, or a point and a beam, later it became possible to do this between two beams. None of these are for the faint of heart, but it works really well. But there is a huge problem. These techniques work with different dimensionalities, or, in other words, they estimate the final result so differently that they cannot be combined with multiple importance sampling. That is indeed a problem, because all of these have completely different strengths and weaknesses. And now, hold on to your papers, because we have finally arrived at the main paper of this episode. It bears the name UPBP, which stands for unifying points, beams, and paths, and it formulates multiple importance sampling between all of these different kinds of light transport simulations. Basically, what we can do with this is throw every advanced simulation program we can think of together, and out comes a super powerful version of them that combines all their strengths and nullifies nearly all of their weaknesses. It is absolutely unreal. Here, you see four completely different algorithms running, and as you can see, they are noisy and smooth at very different places. They are good at computing different kinds of light transport. And now, hold on to your papers, because this final result with the UPBP technique is this. Wow! Light transport on steroids. While we look at some more results, I will note that in my opinion, this is one of the best papers ever written in light transport research. The crazy thing is that I hardly ever hear anybody talk about it. If any paper deserves a bit more attention, it is this one, so I hope this video will help with that. And I would like to dedicate this video to Jaroslav Křivánek, the first author of this absolutely amazing paper, who tragically passed away a few months ago. In my memories, I think of him as the true king of multiple importance sampling, and I hope that now you do too. Note that MIS is not limited to light transport algorithms. It is a general concept that can be used together with a mathematical technique called Monte Carlo integration, which is used pretty much everywhere, from finding out what an electromagnetic field looks like to financial modeling and much, much more. If you have anything to do with Monte Carlo integration, please read Eric Veach's thesis and this paper, and if you feel that it is a good fit, try to incorporate multiple importance sampling into your system. You'll be glad you did. Also, we have recorded my lectures of a master-level course on light transport simulations at the Technical University of Vienna. In this course, we write such a light simulation program from scratch, and it is available free of charge for everyone, no strings attached, so make sure to click the link in the video description to get started. Additionally, I have implemented a small, one-dimensional example of MIS if you wish to pick it up and try it; that's also available in the video description. While talking about the Technical University of Vienna, we are hiring for a PhD and a postdoc position.
The call here about lighting simulation for architectural design is advised by my PhD advisor, Michael Wimmer, whom I highly recommend. Apply now if you feel qualified; the link is in the video description. Thanks for watching and for your generous support, and I'll see you next time.
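Since the transcript above mentions a small one-dimensional MIS example, here is a comparable toy sketch in plain Python that combines two sampling techniques with the balance heuristic; the integrand and the two densities are made up for illustration and are unrelated to the linked implementation.

```python
import math
import random

def f(x):
    return x * x      # toy integrand on [0, 1]; the true integral is 1/3

def p_uniform(x):
    return 1.0        # density of uniform sampling on [0, 1]

def p_linear(x):
    return 2.0 * x    # density proportional to x

def sample_linear():
    return math.sqrt(random.random())  # inverse-CDF sampling for p(x) = 2x

def mis_estimate(n=100_000):
    total = 0.0
    for _ in range(n):
        # One sample from each technique; with equal sample counts, the balance
        # heuristic weight is w_i = p_i / (p1 + p2), so w_i * f / p_i = f / (p1 + p2).
        x1 = random.random()
        total += f(x1) / (p_uniform(x1) + p_linear(x1))
        x2 = sample_linear()
        total += f(x2) / (p_uniform(x2) + p_linear(x2))
    return total / n

print(mis_estimate())  # should print something close to 1/3
```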
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Dr. Karoizor Naifahir."}, {"start": 5.0, "end": 10.48, "text": " Whenever we look at these amazing research papers on physical simulations, it is always"}, {"start": 10.48, "end": 14.32, "text": " a joy seeing people discussing them in the comments section."}, {"start": 14.32, "end": 19.32, "text": " However, one thing that caught my attention is that some people comment about how things"}, {"start": 19.32, "end": 23.400000000000002, "text": " look and not on how things move in these papers."}, {"start": 23.400000000000002, "end": 28.32, "text": " Which is fair enough, and to this end, I will devote this episode to talk about a few"}, {"start": 28.32, "end": 31.64, "text": " amazing techniques in light transport simulations."}, {"start": 31.64, "end": 36.44, "text": " But first things first, when talking about physical simulations, we are talking about a technique"}, {"start": 36.44, "end": 39.28, "text": " that computes how things move."}, {"start": 39.28, "end": 44.72, "text": " Then, we typically run a light simulation program that computes how things look."}, {"start": 44.72, "end": 49.92, "text": " The two are completely independent, which means that it is possible that the physical behavior"}, {"start": 49.92, "end": 56.32, "text": " of bread breaking here is correct, but the bread itself does not look perfectly realistic."}, {"start": 56.32, "end": 61.52, "text": " The second part depends on the quality of the light simulation and the materials used"}, {"start": 61.52, "end": 62.52, "text": " there."}, {"start": 62.52, "end": 67.72, "text": " We can create such an image by simulating the path of millions and millions of light rays."}, {"start": 67.72, "end": 73.32, "text": " And initially, this image will look noisy, and as we add more and more rays, this image"}, {"start": 73.32, "end": 76.56, "text": " will slowly clean up over time."}, {"start": 76.56, "end": 81.56, "text": " If we don't have a well-optimized program, this can take from hours to days to compute."}, {"start": 81.56, "end": 86.76, "text": " We can speed up this process by carefully choosing where to shoot these rays, and this is a"}, {"start": 86.76, "end": 89.72, "text": " technique that is called important sampling."}, {"start": 89.72, "end": 96.44, "text": " But then, around 1993, an amazing paper appeared by the name Bi-directional Path Tracing that"}, {"start": 96.44, "end": 103.08, "text": " proposed that we don't just start building light paths from one direction, but two instead."}, {"start": 103.08, "end": 108.0, "text": " One from the camera, and one from the light source, and then connect them."}, {"start": 108.0, "end": 112.48, "text": " This significantly improved the efficiency of these light simulations, however, it opened"}, {"start": 112.48, "end": 114.48, "text": " up a new kind of worms."}, {"start": 114.48, "end": 119.56, "text": " There are many different ways of connecting these paths, which leads to mathematical difficulties."}, {"start": 119.56, "end": 125.2, "text": " For instance, we have to specify the probability of a light path forming, but what do we do"}, {"start": 125.2, "end": 128.52, "text": " if there are multiple ways of producing this light path?"}, {"start": 128.52, "end": 131.16, "text": " There will be multiple probabilities."}, {"start": 131.16, "end": 133.44, "text": " What do we do with all this stuff?"}, {"start": 133.44, "end": 139.16, "text": " To address this, Eric Vich described a 
magical algorithm in his thesis, and thus, multiple"}, {"start": 139.16, "end": 141.32, "text": " important sampling was born."}, {"start": 141.32, "end": 146.35999999999999, "text": " I can say without exaggeration that this is one of the most powerful techniques in all"}, {"start": 146.35999999999999, "end": 148.72, "text": " photorealistic rendering research."}, {"start": 148.72, "end": 155.32, "text": " What multiple important sampling, or from now on, MIS in short does, is combine these multiple"}, {"start": 155.32, "end": 160.44, "text": " sampling techniques in a way that accentuates the strength of each of them."}, {"start": 160.44, "end": 165.48, "text": " For instance, you can see the image created by one sampling technique here, and the image"}, {"start": 165.48, "end": 168.52, "text": " from a different one here."}, {"start": 168.52, "end": 174.6, "text": " Both of them are quite noisy, but if we combine them with MIS, we get this instead in the"}, {"start": 174.6, "end": 176.6, "text": " same amount of time."}, {"start": 176.6, "end": 179.44, "text": " A much smoother, less noisy image."}, {"start": 179.44, "end": 184.6, "text": " In many cases, this can truly bring down the computation times from several hours to"}, {"start": 184.6, "end": 186.44, "text": " several minutes."}, {"start": 186.44, "end": 187.92, "text": " Absolute witchcraft."}, {"start": 187.92, "end": 193.04, "text": " Later, even more advanced techniques appeared to accelerate the speed of these light simulation"}, {"start": 193.04, "end": 194.2, "text": " programs."}, {"start": 194.2, "end": 199.04, "text": " For instance, it is now not only possible to compute light transport between points in"}, {"start": 199.04, "end": 203.39999999999998, "text": " space, but between a point and a beam instead."}, {"start": 203.39999999999998, "end": 207.56, "text": " You see the evolution of an image using this photom beam-based technique."}, {"start": 207.56, "end": 213.0, "text": " This way, we can get rid of the point-based noise and get a much, much more appealing rendering"}, {"start": 213.0, "end": 214.0, "text": " process."}, {"start": 214.0, "end": 219.72, "text": " The lead author of this beam paper is Vojta Kiyaros, who, three years later, ended up being"}, {"start": 219.72, "end": 224.16, "text": " the head of the rendering group at the Amazing Disney Research Lab."}, {"start": 224.16, "end": 228.68, "text": " Around that time, he also hired me to work with him at Disney on a project I can't talk"}, {"start": 228.68, "end": 233.88, "text": " about, which was an incredible and life-changing experience, and I will be forever grateful"}, {"start": 233.88, "end": 235.52, "text": " for his kindness."}, {"start": 235.52, "end": 240.64, "text": " By the way, he is now a professor at the Dartmouth University and just keeps pumping out one"}, {"start": 240.64, "end": 243.16, "text": " killer paper after another."}, {"start": 243.16, "end": 248.4, "text": " So as you might have guessed, if it is possible to compute light transport between two points,"}, {"start": 248.4, "end": 253.96, "text": " the point and a beam, later it became possible to do this between two beams."}, {"start": 253.96, "end": 258.04, "text": " None of these are for the faint of the heart, but it works really well."}, {"start": 258.04, "end": 259.88, "text": " But there is a huge problem."}, {"start": 259.88, "end": 265.24, "text": " These techniques work with different dimensionalities, or, in other words, they estimate the final"}, {"start": 
265.24, "end": 270.76, "text": " result so differently that they cannot be combined with multiple importance sampling."}, {"start": 270.76, "end": 275.08, "text": " That is, indeed a problem, because all of these have completely different strengths and"}, {"start": 275.08, "end": 276.56, "text": " weaknesses."}, {"start": 276.56, "end": 281.2, "text": " And now, hold on to your papers because we have finally arrived to the main paper of"}, {"start": 281.2, "end": 282.4, "text": " this episode."}, {"start": 282.4, "end": 289.15999999999997, "text": " It bears the name UPBP, which stands for unifying points, beams, and paths, and it formulates"}, {"start": 289.15999999999997, "end": 293.56, "text": " multiple importance sampling between all of these different kinds of light transport"}, {"start": 293.56, "end": 294.56, "text": " simulations."}, {"start": 294.56, "end": 299.56, "text": " Basically, what we can do with this is throw every advanced simulation program we can think"}, {"start": 299.56, "end": 305.68, "text": " of together and out comes a super powerful version of them that combines all their strengths"}, {"start": 305.68, "end": 309.08, "text": " and nullifies nearly all of their weaknesses."}, {"start": 309.08, "end": 311.16, "text": " It is absolutely unreal."}, {"start": 311.16, "end": 316.76, "text": " Here, you see four completely different algorithms running, and as you can see, they are noisy"}, {"start": 316.76, "end": 319.48, "text": " and smooth at very different places."}, {"start": 319.48, "end": 323.36, "text": " They are good at computing different kinds of light transport."}, {"start": 323.36, "end": 329.52, "text": " And now, hold on to your papers because this final result with the UPBP technique is this."}, {"start": 329.52, "end": 331.52, "text": " Wow!"}, {"start": 331.52, "end": 333.88, "text": " Light transport on steroids."}, {"start": 333.88, "end": 337.96, "text": " While we look at some more results, I will note that in my opinion, this is one of the"}, {"start": 337.96, "end": 341.35999999999996, "text": " best papers ever written in light transport research."}, {"start": 341.35999999999996, "end": 345.35999999999996, "text": " The crazy thing is that I hardly ever hear anybody talk about it."}, {"start": 345.35999999999996, "end": 351.0, "text": " If any paper would deserve a bit more attention, so I hope this video will help with that."}, {"start": 351.0, "end": 356.24, "text": " And I would like to dedicate this video to Jerozlav Krzywanek, the first author of this absolutely"}, {"start": 356.24, "end": 360.76, "text": " amazing paper who has tragically passed away a few months ago."}, {"start": 360.76, "end": 366.08, "text": " In my memories, I think of him as the true king of multiple important sampling, and I"}, {"start": 366.08, "end": 368.92, "text": " hope that now you do too."}, {"start": 368.92, "end": 372.48, "text": " Note that MIS is not limited to light transport algorithms."}, {"start": 372.48, "end": 377.72, "text": " It is a general concept that can be used together with a mathematical technique called Monte Carlo"}, {"start": 377.72, "end": 383.52, "text": " integration, which is used pretty much everywhere from finding out what an electromagnetic field"}, {"start": 383.52, "end": 387.71999999999997, "text": " looks like to financial modeling and much, much more."}, {"start": 387.71999999999997, "end": 392.76, "text": " If you have anything to do with Monte Carlo integration, please read Eric Vichy's thesis"}, {"start": 392.76, "end": 
398.24, "text": " and this paper, and if you feel that it is a good fit, try to incorporate multiple important"}, {"start": 398.24, "end": 400.24, "text": " sampling into your system."}, {"start": 400.24, "end": 401.59999999999997, "text": " You'll be glad you did."}, {"start": 401.59999999999997, "end": 406.52, "text": " Also, we have recorded my lectures of a master-level course on light transport simulations"}, {"start": 406.52, "end": 408.79999999999995, "text": " at the Technical University of Vienna."}, {"start": 408.8, "end": 414.0, "text": " In this course, we write such a light simulation program from scratch, and it is available free"}, {"start": 414.0, "end": 418.36, "text": " of charge for everyone, no strings attached, so make sure to click the link in the video"}, {"start": 418.36, "end": 420.12, "text": " description to get started."}, {"start": 420.12, "end": 426.04, "text": " Additionally, I have implemented a small, one-dimensional example of MIS if you wish to pick it up and"}, {"start": 426.04, "end": 429.44, "text": " try it, that's also available in the video description."}, {"start": 429.44, "end": 434.64, "text": " While talking about the Technical University of Vienna, we are hiring for a PhD and a postdoc"}, {"start": 434.64, "end": 435.64, "text": " position."}, {"start": 435.64, "end": 440.64, "text": " The call here about lighting simulation for architectural design is advised by my PhD"}, {"start": 440.64, "end": 444.28, "text": " advisor, Mikhail Vima, who I highly recommend."}, {"start": 444.28, "end": 448.15999999999997, "text": " Apply now if you feel qualified, the link is in the video description."}, {"start": 448.16, "end": 474.92, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=B1Dk_9k6l08
This Neural Network Turns Videos Into 60 FPS!
❤️ Check out Weights & Biases here and sign up for a free demo here: https://www.wandb.com/papers Their blog post on hyperparameter optimization is available here: https://www.wandb.com/articles/find-the-most-important-hyperparameters-in-seconds 📝 The paper "Depth-Aware Video Frame Interpolation" and its source code are available here: https://sites.google.com/view/wenbobao/dain The promised playlist with a TON of interpolated videos: https://www.youtube.com/playlist?list=PLDi8wAVyouYNDl7gGdSbWKdRxIogfeD3H 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Far Cry video source by N00MKRAD: https://www.youtube.com/watch?v=tW0cvyut7Gk&list=PLDi8wAVyouYNDl7gGdSbWKdRxIogfeD3H&index=20 Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #DainApp
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. With today's camera and graphics technology, we can enjoy smooth and creamy videos on our devices that were created with 60 frames per second. I also make each of these videos using 60 frames per second, however, it almost always happens that our encounter paper videos from 24 to 30 frames per second or FPS in short. In this case, I put them in my video editor that has a 60 FPS timeline, so half or even more of these frames will not provide any new information. As we try to slow down the videos for some nice slow motion action, this ratio is even worse, creating an extremely choppy output video because we have huge gaps between these frames. So does this mean that there is nothing we can do and have to put up with this choppy footage? No, not at all. Earlier, we discussed two potential techniques to remedy this issue. One was frame blending, which simply computes the average of two consecutive images and presents that as a solution. This helps a little for simpler cases, but this technique is unable to produce new information. Optical Flow is a much more sophisticated method that is very capable as it tries to predict the motion that takes place between these frames. This can kind of produce new information and I use this in the video series on a regular basis, but the output footage also has to be carefully inspected for unwanted artifacts, which are relatively common occurrence. Now, our season follow scholars will immediately note that we have a lot of high frame rate videos on the internet. Why not delete some of the in-between frames, give the choppy and the smooth videos to a neural network and teach it to fill in the gaps. After the lengthy training process, it should be able to complete these choppy videos properly. So, is that true? Yes, but note that there are plenty of techniques out there that already do this, so what is new in this paper? Well, this work does that and much more. We will have a look at the results which are absolutely incredible, but to be able to appreciate what is going on, let me quickly show you this. The design of this neural network tries to produce four different kinds of data to fill in these images. One is optical flows, which is part of previous solutions too, but two, it also produces a depth map that tells us how far different parts of the image are from the camera. This is of utmost importance because if we rotate this camera around, previously occluded objects suddenly become visible and we need proper intelligence to be able to recognize this and to fill in this kind of missing information. This is what the contextual extraction step is for, which drastically improves the quality of the reconstruction, and finally, the interpolation kernels are also learned, which gives it more knowledge as to what data to take from the previous and the next frame. Since it also has a contextual understanding of these images, one would think that it needs a ton of neighboring frames to understand what is going on, which surprisingly is not the case at all. All it needs is just the two neighboring images. So, after doing all this work, it better be worth it, right? Let's have a look at some results. Hold on to your papers and in the meantime, look at how smooth and creamy the outputs are. Love it. 
Because it also deals with contextual information, if you wish to feel like a real scholar, you can gaze at regions where the occlusion situation changes rapidly and see how well it fills in this kind of information. Unreal. So, how does one show that the technique is quite robust? Well, by producing and showing it off on tons and tons of footage, and that is exactly what the authors did. I put a link to a huge playlist with 33 different videos in the description, so you can have a look at how well this works on a wide variety of genres. Now, of course, this is not the first technique for learning-based frame interpolation, so let's see how it stacks up against the competition. Wow. This is quite a value proposition, because depending on the dataset, it comes out in first or second place on most examples. The PSNR is the peak signal-to-noise ratio, while the SSIM is the structural similarity metric, both of which measure how well the algorithm reconstructs these details compared to the ground truth, and both are subject to maximization. Note that neither of them is linear, therefore even a small difference in these numbers can mean a significant difference. I think we are now at a point where these learning-based tools are getting so much better than their handcrafted optical flow rivals that they will quickly find their way into production software. I cannot wait. What a time to be alive. This episode has been supported by Weights & Biases. In this post, they show you which hyperparameters to tweak to improve your model performance. Also, Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs, such as OpenAI, Toyota Research, GitHub, and more. They don't lock you in, and if you are an academic or have an open-source project, you can use their tools for free. It is really as good as it gets. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
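The transcript above compares methods with PSNR and SSIM and stresses that neither scale is linear. As a rough illustration only (not from the paper), here is a minimal Python sketch that scores the naive frame-blending baseline against a ground-truth middle frame; the file names, the 4 dB figure, and the use of NumPy and scikit-image are assumptions made for this example.

```python
import numpy as np
from skimage import io
from skimage.metrics import structural_similarity  # assumes scikit-image >= 0.19

# Hypothetical file names: two neighboring frames and the true middle frame.
frame_a = io.imread("frame_0.png").astype(np.float64) / 255.0
frame_b = io.imread("frame_2.png").astype(np.float64) / 255.0
ground_truth = io.imread("frame_1.png").astype(np.float64) / 255.0

# The naive "frame blending" baseline mentioned in the video: average the
# two neighbors. It cannot create any new information.
blended = 0.5 * (frame_a + frame_b)

# PSNR in decibels is a logarithmic function of the mean squared error.
mse = np.mean((ground_truth - blended) ** 2)
psnr = 10.0 * np.log10(1.0 / mse)
ssim = structural_similarity(ground_truth, blended, channel_axis=-1, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB   SSIM: {ssim:.4f}")

# Because PSNR is logarithmic, reading 28 dB instead of 24 dB means the
# squared error dropped by a factor of 10 ** (4 / 10), roughly 2.5x.
print("error ratio for a 4 dB gain:", 10 ** (4 / 10))
```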
[{"start": 0.0, "end": 3.84, "text": " Dear Fellow Scholars, this is Two Minute Papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 3.84, "end": 9.92, "text": " With today's camera and graphics technology, we can enjoy smooth and creamy videos on our devices"}, {"start": 9.92, "end": 12.8, "text": " that were created with 60 frames per second."}, {"start": 12.8, "end": 16.56, "text": " I also make each of these videos using 60 frames per second,"}, {"start": 16.56, "end": 23.92, "text": " however, it almost always happens that our encounter paper videos from 24 to 30 frames per second"}, {"start": 23.92, "end": 25.6, "text": " or FPS in short."}, {"start": 25.6, "end": 32.24, "text": " In this case, I put them in my video editor that has a 60 FPS timeline, so half or even more of"}, {"start": 32.24, "end": 35.68, "text": " these frames will not provide any new information."}, {"start": 35.68, "end": 39.760000000000005, "text": " As we try to slow down the videos for some nice slow motion action,"}, {"start": 39.760000000000005, "end": 44.480000000000004, "text": " this ratio is even worse, creating an extremely choppy output video"}, {"start": 44.480000000000004, "end": 46.72, "text": " because we have huge gaps between these frames."}, {"start": 47.44, "end": 52.72, "text": " So does this mean that there is nothing we can do and have to put up with this choppy footage?"}, {"start": 52.72, "end": 54.88, "text": " No, not at all."}, {"start": 54.88, "end": 58.24, "text": " Earlier, we discussed two potential techniques to remedy this issue."}, {"start": 58.88, "end": 64.48, "text": " One was frame blending, which simply computes the average of two consecutive images"}, {"start": 64.48, "end": 66.24000000000001, "text": " and presents that as a solution."}, {"start": 66.88, "end": 72.56, "text": " This helps a little for simpler cases, but this technique is unable to produce new information."}, {"start": 73.44, "end": 78.80000000000001, "text": " Optical Flow is a much more sophisticated method that is very capable as it tries to predict"}, {"start": 78.80000000000001, "end": 81.36, "text": " the motion that takes place between these frames."}, {"start": 81.36, "end": 87.68, "text": " This can kind of produce new information and I use this in the video series on a regular basis,"}, {"start": 87.68, "end": 93.28, "text": " but the output footage also has to be carefully inspected for unwanted artifacts,"}, {"start": 93.28, "end": 96.0, "text": " which are relatively common occurrence."}, {"start": 96.0, "end": 101.92, "text": " Now, our season follow scholars will immediately note that we have a lot of high frame rate videos"}, {"start": 101.92, "end": 108.48, "text": " on the internet. 
Why not delete some of the in-between frames, give the choppy and the smooth videos"}, {"start": 108.48, "end": 112.4, "text": " to a neural network and teach it to fill in the gaps."}, {"start": 112.4, "end": 117.2, "text": " After the lengthy training process, it should be able to complete these choppy videos properly."}, {"start": 117.76, "end": 119.2, "text": " So, is that true?"}, {"start": 120.0, "end": 124.72, "text": " Yes, but note that there are plenty of techniques out there that already do this,"}, {"start": 124.72, "end": 126.32000000000001, "text": " so what is new in this paper?"}, {"start": 126.96000000000001, "end": 130.8, "text": " Well, this work does that and much more."}, {"start": 130.8, "end": 134.0, "text": " We will have a look at the results which are absolutely incredible,"}, {"start": 134.0, "end": 138.16, "text": " but to be able to appreciate what is going on, let me quickly show you this."}, {"start": 138.8, "end": 144.08, "text": " The design of this neural network tries to produce four different kinds of data to fill in these images."}, {"start": 144.88, "end": 148.96, "text": " One is optical flows, which is part of previous solutions too,"}, {"start": 148.96, "end": 155.6, "text": " but two, it also produces a depth map that tells us how far different parts of the image are from"}, {"start": 155.6, "end": 161.6, "text": " the camera. This is of utmost importance because if we rotate this camera around, previously"}, {"start": 161.6, "end": 167.12, "text": " occluded objects suddenly become visible and we need proper intelligence to be able to recognize"}, {"start": 167.12, "end": 172.72, "text": " this and to fill in this kind of missing information. This is what the contextual extraction"}, {"start": 172.72, "end": 178.24, "text": " step is for, which drastically improves the quality of the reconstruction, and finally,"}, {"start": 178.24, "end": 183.68, "text": " the interpolation kernels are also learned, which gives it more knowledge as to what data to take"}, {"start": 183.68, "end": 190.07999999999998, "text": " from the previous and the next frame. Since it also has a contextual understanding of these images,"}, {"start": 190.08, "end": 194.96, "text": " one would think that it needs a ton of neighboring frames to understand what is going on,"}, {"start": 194.96, "end": 201.20000000000002, "text": " which surprisingly is not the case at all. All it needs is just the two neighboring images."}, {"start": 201.92000000000002, "end": 208.0, "text": " So, after doing all this work, it better be worth it, right? Let's have a look at some results."}, {"start": 208.0, "end": 213.28, "text": " Hold on to your papers and in the meantime, look at how smooth and creamy the outputs are."}, {"start": 213.28, "end": 220.96, "text": " Love it. Because it also deals with contextual information, if you wish to feel like a real"}, {"start": 220.96, "end": 227.44, "text": " scholar, you can gaze at regions where the occlusion situation changes rapidly and see how well it"}, {"start": 227.44, "end": 238.4, "text": " feels in this kind of information. Hand real. So, how does one show that the technique is quite robust?"}, {"start": 238.4, "end": 243.76000000000002, "text": " Well, by producing and showing it off on tons and tons of footage, and that is exactly what the"}, {"start": 243.76000000000002, "end": 248.72, "text": " authors did. 
I put a link to a huge playlist with 33 different videos in the description,"}, {"start": 248.72, "end": 252.64000000000001, "text": " so you can have a look at how well this works on a wide variety of genres."}, {"start": 253.28, "end": 257.92, "text": " Now, of course, this is not the first technique for learning-based frame interpolation,"}, {"start": 257.92, "end": 261.2, "text": " so let's see how it stacks up against the competition."}, {"start": 262.64, "end": 267.36, "text": " Wow. This is quite a value proposition, because depending on the dataset,"}, {"start": 267.36, "end": 270.8, "text": " it comes out first and second place on most examples."}, {"start": 273.92, "end": 280.0, "text": " The PSNR is the peak signal to noise ratio, while the SSIM is the structure of similarity metric,"}, {"start": 280.0, "end": 285.6, "text": " both of which measure how well the algorithm reconstructs these details compared to the"}, {"start": 285.6, "end": 291.52000000000004, "text": " ground truth and both are subject to maximization. Note that none of them are linear,"}, {"start": 291.52000000000004, "end": 296.56, "text": " therefore, even a small difference in these numbers can mean a significant difference."}, {"start": 296.56, "end": 301.92, "text": " I think we are now at a point where these tools are getting so much better than their handcrafted"}, {"start": 301.92, "end": 306.88, "text": " optical flow rivals that I think they will quickly find their way to production software."}, {"start": 306.88, "end": 313.52, "text": " I cannot wait. What a time to be alive. This episode has been supported by weights and biases."}, {"start": 313.52, "end": 318.88, "text": " In this post, they show you which hyper-parameters to tweak to improve your model performance."}, {"start": 318.88, "end": 324.16, "text": " Also, weight and biases provide tools to track your experiments in your deep learning projects."}, {"start": 324.16, "end": 327.68, "text": " Their system is designed to save you a ton of time and money,"}, {"start": 327.68, "end": 333.92, "text": " and it is actively used in projects at prestigious labs, such as OpenAI, Toyota Research,"}, {"start": 333.92, "end": 339.76000000000005, "text": " GitHub, and more. They don't lock you in and if you are an academic or have an open-source project,"}, {"start": 339.76000000000005, "end": 344.0, "text": " you can use their tools for free. It is really as good as it gets."}, {"start": 344.0, "end": 350.32000000000005, "text": " Make sure to visit them through wnb.com slash papers, or just click the link in the video description,"}, {"start": 350.32, "end": 355.76, "text": " and you can get a free demo today. Our thanks to weights and biases for their long-standing support"}, {"start": 355.76, "end": 360.15999999999997, "text": " and helping us make better videos for you. Thanks for watching and for your generous support,"}, {"start": 360.16, "end": 388.32000000000005, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Ks7wDYsN4yM
Neural Portrait Relighting is Here!
❤️ Check out Weights & Biases here and sign up for a free demo here: https://www.wandb.com/papers Their blog post and example project are available here: - https://www.wandb.com/articles/exploring-gradients - https://colab.research.google.com/drive/1bsoWY8g0DkxAzVEXRigrdqRZlq44QwmQ 📝 The paper "Deep Single Image Portrait Relighting" is available here: https://zhhoper.github.io/dpr.html ☀️ Our "Separable Subsurface Scattering" paper with source code is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/separable-subsurface-scattering-with-activision-blizzard/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In computer graphics, when we are talking about portrait relighting, we mean a technique that is able to look at an image and change the lighting, and maybe even the materials or geometry, after this image has been taken. This is a very challenging endeavor. So, can neural networks put a dent into this problem and give us something new and better? You bet. The examples that you see here are done with this new work that uses a learning-based technique and is able to change the lighting for human portraits, and it only requires one input image. You see, normally, using methods in computer graphics to relight these images would require trying to find out what the geometry of the face, the materials, and the lighting are from the image, and then we can change the lighting or other parameters, run a light simulation program, and hope that the estimations are good enough to make it realistic. However, if we wish to use neural networks to learn the concept of portrait relighting, of course, we need quite a bit of training data. Since this is not trivially available, the paper contains a new dataset with over 25,000 portrait images that are relit in five different ways. It also proposes a neural network structure that can learn this relighting operation efficiently. It is shaped a bit like an hourglass and contains encoder and decoder parts. The encoder part takes an image as an input and estimates what lighting could have been used to produce it, while the decoder part is where we can play around with changing the lighting, and it will generate the appropriate image that this kind of lighting would produce. What you see here are skip connections that are useful to save insights from different abstraction levels and transfer them from the encoder to the decoder network. So what does this mean exactly? Intuitively, it is a bit like using the lighting estimator network to teach the image generator what it has learned. So do we really lose a lot if we skip the skip connections? Well, quite a bit. Have a look here. The image on the left shows the result using all skip connections, while as we traverse to the right, we see the results omitting them. These connections indeed make a profound difference. Let's be thankful to the authors of the paper, as putting together such a dataset and trying to get an understanding as to what network architectures are required to get great results like this takes quite a bit of work. I'd like to make a note about modeling subsurface light transport. This is a piece of footage from our earlier paper that we wrote as a collaboration with the Activision Blizzard company, and you can see here that including this indeed makes a profound difference in the looks of a human face. I cannot wait to see some follow-up papers that take more advanced effects like this into consideration for relighting as well. If you wish to find out more about this work, make sure to click the link in the video description. This episode has been supported by Weights & Biases. Here you see a write-up of theirs where they explain how to visualize gradients running through your models and illustrate it through the example of predicting protein structure. They also have a live example that you can try. Weights & Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford, and Berkeley. 
Make sure to visit them through wandb.com slash papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
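The transcript describes an hourglass-shaped encoder-decoder where the encoder estimates the lighting of the input portrait, the decoder renders the image under a new lighting, and skip connections carry features from encoder to decoder. Below is a minimal, generic PyTorch sketch of that idea only; the layer sizes, the class name, and the 9-dimensional lighting vector are illustrative assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class TinyHourglassRelighter(nn.Module):
    """Toy encoder-decoder with a skip connection, in the spirit of the
    hourglass shape described above. Not the paper's actual network."""
    def __init__(self, light_dim: int = 9):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        # Bottleneck head estimates the lighting that could have produced the input.
        self.light_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                        nn.Linear(64, light_dim))
        # The desired target lighting is injected back into the bottleneck features.
        self.light_embed = nn.Linear(light_dim, 64)
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU())
        self.dec1 = nn.ConvTranspose2d(32 + 32, 3, 4, stride=2, padding=1)

    def forward(self, image, target_light):
        e1 = self.enc1(image)                   # features reused via the skip connection
        e2 = self.enc2(e1)
        estimated_light = self.light_head(e2)   # encoder output: estimated lighting
        light = self.light_embed(target_light)[..., None, None]
        d2 = self.dec2(e2 + light)              # decoder conditioned on the new lighting
        out = self.dec1(torch.cat([d2, e1], dim=1))  # skip connection from the encoder
        return torch.sigmoid(out), estimated_light

# Example: relight a 128x128 portrait with a made-up 9-coefficient lighting vector.
net = TinyHourglassRelighter()
img = torch.rand(1, 3, 128, 128)
new_light = torch.rand(1, 9)
relit, est_light = net(img, new_light)
print(relit.shape, est_light.shape)  # torch.Size([1, 3, 128, 128]) torch.Size([1, 9])
```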
[{"start": 0.0, "end": 4.36, "text": " Dear Fellow Scholars, this is Toulinit Papers with Karo Zsolnai-Fehir."}, {"start": 4.36, "end": 9.040000000000001, "text": " In computer graphics, when we are talking about portrait-relighting, we mean a technique"}, {"start": 9.040000000000001, "end": 14.76, "text": " that is able to look at an image and change the lighting and maybe even the materials"}, {"start": 14.76, "end": 18.28, "text": " or geometry after this image has been taken."}, {"start": 18.28, "end": 20.36, "text": " This is a very challenging endeavor."}, {"start": 20.36, "end": 26.68, "text": " So, can Neuron-Atworks put a dent into this problem and give us something new and better?"}, {"start": 26.68, "end": 27.68, "text": " You bet."}, {"start": 27.68, "end": 32.84, "text": " Examples that you see here are done with this new work that uses a learning-based technique"}, {"start": 32.84, "end": 39.96, "text": " and is able to change the lighting for human portraits and only requires one input image."}, {"start": 39.96, "end": 45.36, "text": " You see, normally, using methods in computer graphics to relate these images would require"}, {"start": 45.36, "end": 50.480000000000004, "text": " trying to find out what the geometry of the face, materials and lighting is from the"}, {"start": 50.480000000000004, "end": 56.4, "text": " image and then we can change the lighting or other parameters, run a light simulation"}, {"start": 56.4, "end": 61.0, "text": " program and hope that the estimations are good enough to make it realistic."}, {"start": 61.0, "end": 66.08, "text": " However, if we wish to use Neuron-Atworks to learn the concept of portrait-relighting,"}, {"start": 66.08, "end": 69.64, "text": " of course, we need quite a bit of training data."}, {"start": 69.64, "end": 75.48, "text": " Since this is not trivially available, the paper contains a new dataset with over 25,000"}, {"start": 75.48, "end": 79.44, "text": " portrait images that are relit in five different ways."}, {"start": 79.44, "end": 85.44, "text": " It also proposes a Neuron-Atworks structure that can learn this reliting operation efficiently."}, {"start": 85.44, "end": 90.92, "text": " It is shaped a bit like an hourglass and contains an encoder and decoder parts."}, {"start": 90.92, "end": 95.96, "text": " The encoder part takes an image as an input and estimates what lighting could have been"}, {"start": 95.96, "end": 101.44, "text": " used to produce it while the decoder part is where we can play around with changing"}, {"start": 101.44, "end": 107.88, "text": " the lighting and it will generate the appropriate image that this kind of lighting would produce."}, {"start": 107.88, "end": 113.03999999999999, "text": " What you see here are skip connections that are useful to save insights from different"}, {"start": 113.04, "end": 118.52000000000001, "text": " abstraction levels and transfer them from the encoder to the decoder network."}, {"start": 118.52000000000001, "end": 120.96000000000001, "text": " So what does this mean exactly?"}, {"start": 120.96000000000001, "end": 126.60000000000001, "text": " Intuitively, it is a bit like using the lighting estimator network to teach the image generator"}, {"start": 126.60000000000001, "end": 128.56, "text": " what it has learned."}, {"start": 128.56, "end": 133.0, "text": " So do we really lose a lot if we skip the skip connections?"}, {"start": 133.0, "end": 135.04000000000002, "text": " Well, quite a bit."}, {"start": 135.04000000000002, "end": 136.04000000000002, 
"text": " Have a look here."}, {"start": 136.04000000000002, "end": 140.64000000000001, "text": " The image on the left shows the result using all skip connections while as we traverse"}, {"start": 140.64, "end": 143.67999999999998, "text": " to the right we see the results omitting them."}, {"start": 143.67999999999998, "end": 147.48, "text": " These connections indeed make the profound difference."}, {"start": 147.48, "end": 152.2, "text": " Let's be thankful for the authors of the paper as putting together such a data set and"}, {"start": 152.2, "end": 157.07999999999998, "text": " trying to get an understanding as to what network architectures it would require to get"}, {"start": 157.07999999999998, "end": 160.64, "text": " great results like this takes quite a bit of work."}, {"start": 160.64, "end": 164.51999999999998, "text": " I'd like to make a note about modeling subsurface light transport."}, {"start": 164.51999999999998, "end": 168.72, "text": " This is a piece of footage from our earlier paper that we wrote as a collaboration with"}, {"start": 168.72, "end": 173.92, "text": " the Activision Blizzard Company and you can see here that including this indeed makes"}, {"start": 173.92, "end": 177.52, "text": " a profound difference in the looks of a human face."}, {"start": 177.52, "end": 182.72, "text": " I cannot wait to see some follow-up papers that take more advanced effects like this into"}, {"start": 182.72, "end": 185.32, "text": " consideration for relighting as well."}, {"start": 185.32, "end": 190.04, "text": " If you wish to find out more about this work make sure to click the link in the video description."}, {"start": 190.04, "end": 193.28, "text": " This episode has been supported by weights and biases."}, {"start": 193.28, "end": 198.36, "text": " Here you see a write-up of theirs where they explain how to visualize gradients running"}, {"start": 198.36, "end": 203.64000000000001, "text": " through your models and illustrate it through the example of predicting protein structure."}, {"start": 203.64000000000001, "end": 206.64000000000001, "text": " They also have a live example that you can try."}, {"start": 206.64000000000001, "end": 211.12, "text": " Weight and biases provides tools to track your experiments in your deep learning projects."}, {"start": 211.12, "end": 216.24, "text": " It can save you a ton of time and money in these projects and is being used by OpenAI"}, {"start": 216.24, "end": 219.4, "text": " to your research, Stanford and Berkeley."}, {"start": 219.4, "end": 224.56, "text": " Make sure to visit them through www.b.com slash papers or just click the link in the video"}, {"start": 224.56, "end": 227.64000000000001, "text": " description and you can get a free demo today."}, {"start": 227.64, "end": 231.39999999999998, "text": " Our thanks to weights and biases for helping us make better videos for you."}, {"start": 231.4, "end": 260.04, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=62Q1NL4k8cI
OpenAI Performs Surgery On A Neural Network to Play DOTA 2
❤️ Check out Linode here and get $20 free credit on your account: https://www.linode.com/papers 📝 The paper "Dota 2 with Large Scale Deep Reinforcement Learning" from #OpenAI is available here: https://arxiv.org/abs/1912.06680 https://openai.com/projects/five/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #DOTA2
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Finally, the full research paper has appeared on OpenAI Five, which is an AI that plays Dota 2, a multiplayer online battle arena game with a huge cult following. And, you may not expect this, it is not only as good as some of the best players in the world, but the paper also describes a surgery technique that sounds quite unexpected, and I promise to tell you what it is later during this video. This game is a nightmare for any AI to play for three main reasons. One, it requires long-term strategic planning, where it is possible that we make one bad decision, then a thousand good ones, and we still lose the game in the end. Finding out which decision led to this loss is immensely difficult, often even for humans. Two, we have imperfect information, meaning that we can only see what our units and buildings can see. And three, even though these learning agents don't look at the pixels of the game, but see the world as a big bunch of numbers, there is just too much information to look at and too many decisions to make compared to chess or Go, or almost anything else. Despite these difficulties, in 2017, OpenAI showed us an initial version of their agent that was able to play one-versus-one games with only one hero and was able to reliably beat Dendi, a world champion player. That was quite an achievement; however, of course, this was meant to be a stepping stone towards something much bigger, that is, playing the real Dota 2. And just two years later, a newer version named OpenAI Five appeared, defeated the Dota 2 world champions, and beat 99.4% of human players during an online event that ran for multiple days. Many voices said that this would never happen, so two years to pull this off after the first version, I think, was an absolute miracle. Bravo! Now, note that even this version has two key limitations. One, in a normal game, we can choose from a pool of 117 heroes, where this system supports 17 of them, and two, items that allow the player to control multiple characters at once have been disabled. If I remember correctly from a previous post of theirs, invisibility effects are also neglected because the algorithm is not looking at pixels; it would either always have this information shown as a bunch of numbers or never. Neither of these would be good design decisions, and thus invisibility is not part of this technique. Fortunately, the paper is now available, so I was really excited to look under the hood for some more details. So first, as I promised, what is this surgery thing about? You see, the training of the neural network part of this algorithm took no less than 10 months. Now, just imagine forgetting to feed an important piece of information into the system or finding a bug while training is underway. In cases like this, normally we would have to abort the training and start again. If we have a new idea as to how to improve the system, again, we have to abort the training and start again. If a new version of Dota 2 comes out with some changes, you guessed it, we start again. This would be okay if the training took on the order of minutes to hours, but we are talking 10 months here. This is clearly not practical. So, what if there were a technique that could apply all of these changes to a training process that is already underway? Well, this is what the surgery technique is about. 
Here, with the blue curve, you see the agent's skill rating improving over time, and the red lines with the black triangles show us the dates of the surgeries. The authors note that over the 10-month training process, they performed approximately one surgery per two weeks. It seems that getting a doctorate in machine learning research is taking on a whole new meaning. Some of them indeed made an immediate difference, while others seemingly not so much. So how do we assess how potent these surgeries were? Did they give the agent superpowers? Well, have a look at the rerun part here, which is the final Frankenstein's monster agent containing the result of all the surgeries, retrained from scratch. And just look at how quickly it trains, and not only that, but it shoots even higher than the original agent. Absolute madness. Apparently, OpenAI is employing some proper surgeons over there at their lab. I love it. Interestingly, this is not the only time I've seen the word surgery used in computer science outside of medicine. A legendary mathematician named Grigori Perelman, who proved the Poincaré conjecture, also performed a mathematical technique that he called surgery. What's more, we even talked about simulating weightlifting and how a simulated AI agent would walk after getting hamstrung and, you guessed it right, undergoing surgery to fix it. What a time to be alive. And again, an important lesson is that in this project, OpenAI is not spending so much money and resources just to play video games. Dota 2 is a wonderful test bed to see how their AI compares to humans at complex tasks that involve strategy and teamwork. However, the ultimate goal is to reuse parts of this system for other complex problems outside of video games. For instance, the algorithm that you've seen here today can also do this. This episode has been supported by Linode. Linode is the world's largest independent cloud computing provider. Unlike entry-level hosting services, Linode gives you full back-end access to your server, which is your step up to powerful, fast, fully configurable cloud computing. Linode also has one-click apps that streamline your ability to deploy websites, personal VPNs, game servers, and more. If you need something as small as a personal online portfolio, Linode has your back, and if you need to manage tons of clients' websites and reliably serve them to millions of visitors, Linode can do that too. What's more, they offer affordable GPU instances featuring the Quadro RTX 6000, which is tailor-made for AI, scientific computing, and computer graphics projects. If only I had had access to a tool like this while I was working on my PhD studies. To receive $20 credit on your new Linode account, visit linode.com slash papers, or just click the link in the video description and give it a try today. Thanks for your support and I'll see you next time.
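The "surgery" idea above is about carrying trained weights over into a changed architecture so that training can continue instead of restarting from scratch. As a hedged illustration of that general concept (a Net2Net-style sketch, not OpenAI's actual procedure), the toy Python code below widens one hidden layer while preserving the network's outputs; all function names and sizes are invented for the example.

```python
import torch
import torch.nn as nn

def widen_hidden_layer(fc1: nn.Linear, fc2: nn.Linear, new_width: int):
    """Net2Net-style 'surgery': grow the hidden layer between fc1 and fc2
    from fc1.out_features to new_width while preserving the composed
    function fc2(relu(fc1(x))), so training can simply continue."""
    old_width = fc1.out_features
    # Each new hidden unit copies one old unit; the first old_width map to themselves.
    mapping = torch.cat([torch.arange(old_width),
                         torch.randint(0, old_width, (new_width - old_width,))])
    counts = torch.bincount(mapping, minlength=old_width).float()

    new_fc1 = nn.Linear(fc1.in_features, new_width)
    new_fc2 = nn.Linear(new_width, fc2.out_features)
    with torch.no_grad():
        new_fc1.weight.copy_(fc1.weight[mapping])
        new_fc1.bias.copy_(fc1.bias[mapping])
        # A unit replicated k times gets its outgoing weights divided by k,
        # so the sum over its copies reproduces the original contribution.
        new_fc2.weight.copy_(fc2.weight[:, mapping] / counts[mapping])
        new_fc2.bias.copy_(fc2.bias)
    return new_fc1, new_fc2

# Quick check that the surgery preserved the network's behavior.
torch.manual_seed(0)
fc1, fc2 = nn.Linear(8, 4), nn.Linear(4, 2)
x = torch.randn(5, 8)
before = fc2(torch.relu(fc1(x)))
wide_fc1, wide_fc2 = widen_hidden_layer(fc1, fc2, new_width=6)
after = wide_fc2(torch.relu(wide_fc1(x)))
print(torch.allclose(before, after, atol=1e-5))  # True: same outputs, wider network
```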
[{"start": 0.0, "end": 4.5600000000000005, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Karo Zsolnai-Fahir."}, {"start": 4.5600000000000005, "end": 11.32, "text": " Finally, the full research paper has appeared on OpenAI 5, which is an AI that plays Dota 2,"}, {"start": 11.32, "end": 15.76, "text": " a multiplayer online battle arena game with a huge cold following."}, {"start": 15.76, "end": 20.2, "text": " And you may not expect this, it is not only as good as some of the best players in the"}, {"start": 20.2, "end": 26.64, "text": " world, but it also describes a surgery technique that sounds quite unexpected and I promise"}, {"start": 26.64, "end": 29.72, "text": " to tell you what it is later during this video."}, {"start": 29.72, "end": 34.64, "text": " This game is a nightmare for any AI to play because of 3 main reasons."}, {"start": 34.64, "end": 39.76, "text": " One, it requires long-term strategic planning where it is possible that we make one bad"}, {"start": 39.76, "end": 45.56, "text": " decision, then a thousand good ones, and we still lose the game in the end."}, {"start": 45.56, "end": 50.92, "text": " Finding out which decision led to this loss is immensely difficult, often even for humans."}, {"start": 50.92, "end": 56.96, "text": " Two, we have imperfect information, meaning that we can only see what our units and buildings"}, {"start": 56.96, "end": 57.96, "text": " can see."}, {"start": 57.96, "end": 63.08, "text": " And three, even though these learning agents don't look at the pixels of the game, but they"}, {"start": 63.08, "end": 68.0, "text": " see the world as a big bunch of numbers, there is just too much information to look at"}, {"start": 68.0, "end": 74.0, "text": " and too many decisions to make compared to chess or go, or almost anything else."}, {"start": 74.0, "end": 79.64, "text": " Despite these difficulties, in 2017, OpenAI showed us an initial version of their agent"}, {"start": 79.64, "end": 85.28, "text": " that was able to play one versus one games with only one hero and was able to reliably"}, {"start": 85.28, "end": 88.24, "text": " beat Dandy, a world champion player."}, {"start": 88.24, "end": 93.4, "text": " That was quite an achievement, however, of course, this was meant to be a stepping stone"}, {"start": 93.4, "end": 98.2, "text": " towards something much bigger that is playing the real Dota 2."}, {"start": 98.2, "end": 104.44, "text": " And just two years later, an newer version named OpenAI 5 has appeared, defeated the Dota"}, {"start": 104.44, "end": 112.28, "text": " 2 World Champions and beat 99.4% of human players during an online event that ran for multiple"}, {"start": 112.28, "end": 113.28, "text": " days."}, {"start": 113.28, "end": 118.12, "text": " Many voices said that this would never happen, so two years to pull this off after the"}, {"start": 118.12, "end": 121.84, "text": " first version, I think was an absolute miracle."}, {"start": 121.84, "end": 122.84, "text": " Bravo!"}, {"start": 122.84, "end": 126.92, "text": " Now, note that even this version has two key limitations."}, {"start": 126.92, "end": 133.16, "text": " One, in a normal game, we can choose from a pool of 117 heroes where this system supports"}, {"start": 133.16, "end": 139.64, "text": " 17 of them and two items that allow the player to control multiple characters at once"}, {"start": 139.64, "end": 141.36, "text": " have been disabled."}, {"start": 141.36, "end": 146.52, "text": " If I remember correctly from a previous post of theirs, 
invisibility effects are also neglected"}, {"start": 146.52, "end": 151.56, "text": " because the algorithm is not looking at pixels, it would either always have this information"}, {"start": 151.56, "end": 154.92000000000002, "text": " shown as a bunch of numbers or never."}, {"start": 154.92000000000002, "end": 160.92000000000002, "text": " Neither of these would be good design decisions, so thus invisibility is not part of this technique."}, {"start": 160.92000000000002, "end": 165.92000000000002, "text": " Fortunately, the paper is now available, so I was really excited to look under the hood"}, {"start": 165.92000000000002, "end": 167.68, "text": " for some more details."}, {"start": 167.68, "end": 172.04000000000002, "text": " So first, as I promised, what is this surgery thing about?"}, {"start": 172.04000000000002, "end": 176.96, "text": " You see, the training of the neural network part of this algorithm took no less than 10"}, {"start": 176.96, "end": 177.96, "text": " months."}, {"start": 177.96, "end": 182.72, "text": " Now, just imagine forgetting to feed an important piece of information into the system"}, {"start": 182.72, "end": 186.08, "text": " or finding a bug while training is underway."}, {"start": 186.08, "end": 191.04000000000002, "text": " In cases like this, normally we would have to abort the training and start again."}, {"start": 191.04000000000002, "end": 196.20000000000002, "text": " If we have a new idea as to how to improve the system, again we have to abort the training"}, {"start": 196.20000000000002, "end": 197.60000000000002, "text": " and start again."}, {"start": 197.6, "end": 203.07999999999998, "text": " If a new version of Dota 2 comes out with some changes, you guys try it, we start again."}, {"start": 203.07999999999998, "end": 207.6, "text": " This would be okay if the training took from the order of minutes to hours, but we are"}, {"start": 207.6, "end": 210.07999999999998, "text": " talking 10 months here."}, {"start": 210.07999999999998, "end": 212.04, "text": " This is clearly not practical."}, {"start": 212.04, "end": 216.92, "text": " So, what if there would be a technique that would be able to apply all of these changes"}, {"start": 216.92, "end": 219.88, "text": " to a training process that is already underway?"}, {"start": 219.88, "end": 223.56, "text": " Well, this is what the surgery technique is about."}, {"start": 223.56, "end": 228.36, "text": " Here with the blue curve, you see the agents keyerating improving over time and the red"}, {"start": 228.36, "end": 232.88, "text": " lines with the black triangles show us the dates for the surgeries."}, {"start": 232.88, "end": 237.4, "text": " The author's note that over the 10 month training process, they have performed approximately"}, {"start": 237.4, "end": 240.04, "text": " one surgery per two weeks."}, {"start": 240.04, "end": 245.32, "text": " It seems that getting a doctorate in machine learning research is getting a whole new meaning."}, {"start": 245.32, "end": 251.52, "text": " Some of them indeed made an immediate difference while others seemingly not so much."}, {"start": 251.52, "end": 255.52, "text": " So how do we assess how potent these surgeries were?"}, {"start": 255.52, "end": 257.88, "text": " Did they give the agent superpowers?"}, {"start": 257.88, "end": 263.44, "text": " Well, have a look at the rerun part here, which is the final Frankenstein's monster agent"}, {"start": 263.44, "end": 268.36, "text": " containing the result of all the surgeries retrained from 
scratch."}, {"start": 268.36, "end": 273.32, "text": " And just look at how quickly it is trained and not only that, but it shoots even higher"}, {"start": 273.32, "end": 275.72, "text": " than the original agent."}, {"start": 275.72, "end": 277.04, "text": " Absolute madness."}, {"start": 277.04, "end": 282.36, "text": " Apparently, open AI is employing some proper surgeons over there at their lab."}, {"start": 282.36, "end": 284.08000000000004, "text": " I love it."}, {"start": 284.08000000000004, "end": 288.48, "text": " Interestingly, this is not the only time I've seen the word surgery used in the computer"}, {"start": 288.48, "end": 291.20000000000005, "text": " sciences outside of medicine."}, {"start": 291.20000000000005, "end": 296.88, "text": " A legendary mathematician named Gregory Perelman, who proved the Poincare conjecture also"}, {"start": 296.88, "end": 301.04, "text": " performed a mathematical technique that he called surgery."}, {"start": 301.04, "end": 306.24, "text": " What's more, we even talked about simulating weightlifting and how a simulated AI agent"}, {"start": 306.24, "end": 312.2, "text": " will walk after getting hamstrung and, you guessed it right, undergoing surgery to fix"}, {"start": 312.2, "end": 313.2, "text": " it."}, {"start": 313.2, "end": 314.88, "text": " What a time to be alive."}, {"start": 314.88, "end": 320.16, "text": " And again, an important lesson is that in this project, open AI is not spending so much"}, {"start": 320.16, "end": 323.84000000000003, "text": " money and resources just to play video games."}, {"start": 323.84000000000003, "end": 330.40000000000003, "text": " Dota 2 is a wonderful test bed to see how their AI compares to humans at complex tasks"}, {"start": 330.40000000000003, "end": 333.0, "text": " that involve strategy and teamwork."}, {"start": 333.0, "end": 339.32, "text": " However, the ultimate goal is to reuse parts of this system for other complex problems outside"}, {"start": 339.32, "end": 340.76, "text": " of video games."}, {"start": 340.76, "end": 350.36, "text": " For instance, the algorithm that you've seen here today can also do this."}, {"start": 350.36, "end": 352.96, "text": " This episode has been supported by Linode."}, {"start": 352.96, "end": 356.68, "text": " Linode is the world's largest independent cloud computing provider."}, {"start": 356.68, "end": 362.08, "text": " Unlike entry-level hosting services, Linode gives you full back-end access to your server,"}, {"start": 362.08, "end": 367.32, "text": " which is your step up to powerful, fast, fully configurable cloud computing."}, {"start": 367.32, "end": 372.56, "text": " Linode also has one click apps that streamline your ability to deploy websites, personal"}, {"start": 372.56, "end": 375.64, "text": " VPNs, game servers, and more."}, {"start": 375.64, "end": 380.64, "text": " If you need something as small as a personal online portfolio, Linode has your back, and"}, {"start": 380.64, "end": 386.28, "text": " if you need to manage tons of clients' websites and reliably serve them to millions of visitors,"}, {"start": 386.28, "end": 388.03999999999996, "text": " Linode can do that too."}, {"start": 388.04, "end": 395.04, "text": " What's more, they offer affordable GPU instances featuring the Quadro RTX 6000, which is tailor-made"}, {"start": 395.04, "end": 399.44, "text": " for AI, scientific computing, and computer graphics projects."}, {"start": 399.44, "end": 404.48, "text": " If only I had access to a tool like this while I was working on 
my PhD studies."}, {"start": 404.48, "end": 411.04, "text": " To receive $20 credit in your new Linode account, visit linode.com slash papers, or just"}, {"start": 411.04, "end": 414.28000000000003, "text": " click the link in the video description and give it a try today."}, {"start": 414.28, "end": 423.15999999999997, "text": " Thanks for your support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=EjVzjxihGvU
This Neural Network Restores Old Videos
❤️ Check out Weights & Biases here and sign up for a free demo: https://www.wandb.com/papers Their blog post on training neural networks is available here: https://www.wandb.com/articles/fundamentals-of-neural-networks 📝 The paper "DeepRemaster: Temporal Source-Reference Attention Networks for Comprehensive Video Enhancement" is available here: http://iizuka.cs.tsukuba.ac.jp/projects/remastering/en/index.html 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this series, we often discuss a class of techniques by the name of image inpainting. Image inpainting methods are capable of filling in missing details from a mostly intact image. You see the legendary PatchMatch algorithm at work here, which is more than 10 years old, and it is a good old computer graphics method with no machine learning in sight, and after so much time (10 years is an eternity in research years), it still punches way above its weight. However, with the ascendancy of neural network based learning methods, I am often wondering whether it would be possible to take on a more difficult problem, for instance, inpainting not just images, but movies as well. For instance, let's take an old, old black-and-white movie that suffers from missing data, flickering, blurriness, and, interestingly, even the contrast of the footage has changed as it faded over time. Well, hold onto your papers, because this learning-based approach fixes all of these and even more. Step number one is restoration, which takes care of all of these artifacts and contrast issues. You can not only see how much better the restored version is, but it is also reported what the technique did exactly. However, it does more. What more could possibly be asked for? Well, colorization. What it does is look at only six colorized reference images that we have to provide, use them as direction, and propagate the colors to the remainder of the frames, and it does an absolutely amazing job at that. It even tells us which reference image it is looking at when colorizing some of these frames, so if something does not come out favorably, we know which image to recolor. The architecture of the neural network that is used for all this also has to follow the requirements appropriately. For instance, beyond the standard spatial convolution layers, it also makes ample use of temporal convolution layers, which help smear out the colorization information from one reference image to multiple frames. However, in research, a technique is rarely the very first to do something, and sure enough, this is not the first technique that does this kind of restoration and colorization. So, how does it compare to previously published methods? Well, quite favorably. With previous methods, in some cases, the colorization just appears and disappears over time, while it is much more stable here. Also, fewer artifacts make it to the final footage, and since cleaning these up is one of the main objectives of these methods, that's also great news. If we look at some quantitative results, or in other words, numbers that describe the difference, you can see here that we get a 3 to 4 decibel cleaner image, which is outstanding. Note that the decibel scale is not linear but logarithmic, therefore if you read 28 instead of 24, it does not mean that it is just approximately 15% better. It is a much, much more pronounced difference than that. I think these results are approaching a state where they are becoming close to good enough so that we can revive some of these old masterpiece movies and give them a much deserved facelift. What a time to be alive! This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford, and Berkeley. 
They also wrote a guide on the fundamentals of neural networks where they explain in simple terms how to train a neural network properly, what the most common errors are, and how to fix them. It is really great, you've got to have a look. So make sure to visit them through wandb.com slash papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
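The transcript mentions temporal convolution layers that spread colorization information from a reference image across neighboring frames. As a rough, generic illustration (not the DeepRemaster architecture from the paper), here is a minimal PyTorch sketch of a spatio-temporal convolution block operating on a short clip; the class name and tensor sizes are made up for the example.

```python
import torch
import torch.nn as nn

class SpatioTemporalBlock(nn.Module):
    """A 3D convolution mixes information across time (frames) as well as
    space, which is one way temporal context can be shared between frames."""
    def __init__(self, in_ch: int = 3, out_ch: int = 16):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size=(3, 3, 3), padding=1)
        self.act = nn.ReLU()

    def forward(self, clip):  # clip: (batch, channels, frames, height, width)
        return self.act(self.conv(clip))

# Example: an 8-frame RGB clip of 64x64 images (all sizes are arbitrary).
clip = torch.rand(1, 3, 8, 64, 64)
features = SpatioTemporalBlock()(clip)
print(features.shape)  # torch.Size([1, 16, 8, 64, 64]): each frame's features
# now depend on its temporal neighbors, not just on the frame itself.
```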
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Jolene Fahir."}, {"start": 4.32, "end": 9.72, "text": " In this series, we often discuss a class of techniques by the name Image Impainting."}, {"start": 9.72, "end": 14.48, "text": " Image impainting methods are capable of filling in missing details from a mostly intact"}, {"start": 14.48, "end": 15.48, "text": " image."}, {"start": 15.48, "end": 20.36, "text": " You see the legendary patch match algorithm at work here, which is more than 10 years old"}, {"start": 20.36, "end": 25.32, "text": " and it is a good old computer graphics method with no machine learning insight and after"}, {"start": 25.32, "end": 32.2, "text": " so much time, 10 years is an eternity in research years, it still punches way above its weight."}, {"start": 32.2, "end": 37.08, "text": " However, with the ascendancy of neural network based learning methods, I am often wondering"}, {"start": 37.08, "end": 42.32, "text": " whether it would be possible to take a more difficult problem, for instance, impainting"}, {"start": 42.32, "end": 45.6, "text": " not just images, but movies as well."}, {"start": 45.6, "end": 50.2, "text": " For instance, let's take an old, old black and white movie that suffers from missing"}, {"start": 50.2, "end": 57.120000000000005, "text": " data, flickering, blurriness and interestingly, even the contrast of the footage has changed"}, {"start": 57.120000000000005, "end": 58.92, "text": " as it faded over time."}, {"start": 58.92, "end": 64.28, "text": " Well, hold onto your papers because this learning based approach fixes all of these and"}, {"start": 64.28, "end": 65.76, "text": " even more."}, {"start": 65.76, "end": 70.36, "text": " Step number one is restoration, which takes care of all of these artifacts and contrast"}, {"start": 70.36, "end": 71.44, "text": " issues."}, {"start": 71.44, "end": 76.16, "text": " You can not only see how much better the restored version is, but it is also reported"}, {"start": 76.16, "end": 78.52000000000001, "text": " what the technique did exactly."}, {"start": 78.52, "end": 81.2, "text": " However, it does more."}, {"start": 81.2, "end": 83.47999999999999, "text": " What more could be possibly asked for?"}, {"start": 83.47999999999999, "end": 85.6, "text": " Well, colorization."}, {"start": 85.6, "end": 90.56, "text": " What it does is that it looks at only six colorized reference images that we have to provide"}, {"start": 90.56, "end": 96.0, "text": " and uses this as our direction and propagated to the remainder of the frames and it does"}, {"start": 96.0, "end": 98.67999999999999, "text": " an absolutely amazing work at that."}, {"start": 98.67999999999999, "end": 103.16, "text": " It even tells us which reference image it is looking at when colorizing some of these"}, {"start": 103.16, "end": 109.52, "text": " frames, so if something does not come out favorably, we know which image to recolor."}, {"start": 109.52, "end": 113.88, "text": " The architecture of the neural network that is used for all this also has to follow the"}, {"start": 113.88, "end": 115.92, "text": " requirements appropriately."}, {"start": 115.92, "end": 120.8, "text": " For instance, beyond the standard spatial convolution layers, it also makes ample use of"}, {"start": 120.8, "end": 126.47999999999999, "text": " temporal convolution layers, which helps smearing out the colorization information from one"}, {"start": 126.47999999999999, "end": 129.12, "text": " reference image to 
multiple frames."}, {"start": 129.12, "end": 135.48000000000002, "text": " However, in research, a technique is rarely the very first at doing something and sure enough."}, {"start": 135.48000000000002, "end": 140.08, "text": " This is not the first technique that does this kind of restoration and colorization."}, {"start": 140.08, "end": 144.08, "text": " So, how does it compare to previously published methods?"}, {"start": 144.08, "end": 146.08, "text": " Well, quite favorably."}, {"start": 146.08, "end": 152.6, "text": " With previous methods, in some cases, the colorization just appears and disappears over time while"}, {"start": 152.6, "end": 154.6, "text": " it is much more stable here."}, {"start": 154.6, "end": 159.76, "text": " Also, fewer artifacts make it to the final footage and since cleaning these up is one of the"}, {"start": 159.76, "end": 163.56, "text": " main objectives of these methods, that's also great news."}, {"start": 163.56, "end": 169.2, "text": " If we look at some quantitative results or in other words numbers that describe the difference,"}, {"start": 169.2, "end": 175.04, "text": " you can see here that we get a 3-4 decibels cleaner image, which is outstanding."}, {"start": 175.04, "end": 180.76, "text": " Note that the decibel scale is not linear, but a logarithmic scale, therefore if you read"}, {"start": 180.76, "end": 186.76, "text": " 28 instead of 24, it does not mean that it is just approximately 15% better."}, {"start": 186.76, "end": 189.95999999999998, "text": " It is a much, much more pronounced difference than that."}, {"start": 189.95999999999998, "end": 194.6, "text": " I think these results are approaching a state where they are becoming close to good enough"}, {"start": 194.6, "end": 199.72, "text": " so that we can revive some of these old masterpiece movies and give them a much deserved"}, {"start": 199.72, "end": 201.07999999999998, "text": " facelift."}, {"start": 201.07999999999998, "end": 202.76, "text": " What a time to be alive!"}, {"start": 202.76, "end": 205.95999999999998, "text": " This episode has been supported by weights and biases."}, {"start": 205.95999999999998, "end": 210.44, "text": " Wates and biases provides tools to track your experiments in your deep learning projects."}, {"start": 210.44, "end": 215.44, "text": " It can save you a ton of time and money in these projects and is being used by OpenAI"}, {"start": 215.44, "end": 218.32, "text": " to your research, Stanford and Berkeley."}, {"start": 218.32, "end": 223.12, "text": " They also wrote a guide on the fundamentals of neural networks where they explain in simple"}, {"start": 223.12, "end": 228.92, "text": " terms how to train a neural network properly, what are the most common errors you can make,"}, {"start": 228.92, "end": 230.52, "text": " and how to fix them."}, {"start": 230.52, "end": 233.36, "text": " It is really great you got to have a look."}, {"start": 233.36, "end": 238.56, "text": " So make sure to visit them through wendeeb.com slash papers or just click the link in the"}, {"start": 238.56, "end": 241.92000000000002, "text": " video description and you can get a free demo today."}, {"start": 241.92000000000002, "end": 245.68, "text": " Our thanks to weights and biases for helping us make better videos for you."}, {"start": 245.68, "end": 275.24, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=9IqRdEs4_JU
Simulating Breaking Bread 🍞
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "CD-MPM: Continuum Damage Material Point Methods for Dynamic Fracture Animation" and its source code is available here: - https://www.seas.upenn.edu/~cffjiang/research/wolper2019fracture/wolper2019fracture.pdf - https://github.com/squarefk/ziran2019 ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In a recent video, we showcased a computer graphics technique that simulated the process of baking, and now it's time to discuss a paper that is about simulating how we can tear this loaf of bread apart. This paper aligns well with one of the favorite pastimes of a computer graphics researcher, which is, of course, destroying virtual objects in a spectacular fashion. Like the previous work, this new paper also builds on top of the material point method, a hybrid simulation technique that uses both particles and grids to create these beautiful animations. However, it traditionally does not support simulating cracking and tearing phenomena. Now, have a look at this new work and marvel at how beautifully these phenomena are simulated here. With this, we can smash Oreos, candy crabs, pumpkins, and much, much more. This jelly fracture scene is my absolute favorite. Now, when an artist works with these simulations, the issue of artistic control often comes up. After all, this method is meant to compute these phenomena by simulating physics, and we cannot just instruct physics to be more beautiful. Or can we? Well, this technique offers us plenty of parameters to tune the simulation to our liking; two that we will note today are alpha, which is the hardening parameter, and beta, which is the cohesion parameter. So what does that mean exactly? Well, beta is cohesion, which is the force that holds matter together. So as we go to the right, the objects stay more intact, and as we go down, the objects shatter into more and more pieces. The method offers us more parameters than these, but even with these two, we can really make the kind of simulation we are looking for. Huh, what the heck? Let's do two more. We can even control the way the cracks form with the Mc parameter, which is the speed of crack propagation. And G is the energy release, which, as we look to the right, increases the object's resistance to damage. So how long does this take? Well, the technique takes its sweet time. The execution timings range from 17 seconds to about 10 minutes per frame. This is one of those methods that does something that wasn't possible before, and it is about doing things correctly. And after a paper appears on something that makes the impossible possible, follow-up research works get published later that further refine and optimize it. So, as we say, two more papers down the line, and this will run much faster. Now, a word about the first author of the paper, Joshuah Wolper. Strictly speaking, it is his third paper, but only the second within computer graphics, and my goodness, did he come back with guns blazing. This paper was accepted to the SIGGRAPH conference, which is one of the biggest honors a computer graphics researcher can get, perhaps equivalent to the Olympic gold medal for an athlete. It definitely is worthy of a gold medal. Make sure to have a look at the paper in the video description. It is an absolutely beautifully crafted piece of work. Congratulations, Joshuah. This episode has been supported by Lambda. If you're a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos, and I'm happy to tell you that they are offering GPU cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. 
And finally, hold onto your papers because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to lambdalabs.com slash papers and sign up for one of their amazing GPU instances today. Our thanks to Lambda for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
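The transcript above describes the material point method as a hybrid technique that moves information between particles and a background grid every time step. As a rough, hedged illustration of that coupling, here is a minimal particle-to-grid transfer in toy 1D form with linear weights. It is not the CD-MPM solver or the ziran2019 code; the grid size, particle positions, masses, and velocities are made-up values for the example.

```python
# Toy 1D particle-to-grid ("P2G") scatter with linear weights -- an illustration
# of the particle/grid hybrid idea behind MPM, not the paper's implementation.
import numpy as np

N, DX = 8, 1.0                        # number of grid nodes and grid spacing (assumed)
xp = np.array([2.3, 2.7, 5.1])        # particle positions (made up)
mp = np.array([1.0, 1.0, 2.0])        # particle masses
vp = np.array([0.5, -0.2, 1.0])       # particle velocities

grid_m = np.zeros(N)                  # mass accumulated on grid nodes
grid_mv = np.zeros(N)                 # momentum accumulated on grid nodes
for x, m, v in zip(xp, mp, vp):
    i = int(x // DX)                  # index of the grid node to the left
    w_right = (x / DX) - i            # linear interpolation weight to the right node
    w_left = 1.0 - w_right
    grid_m[i]     += w_left * m
    grid_mv[i]    += w_left * m * v
    grid_m[i + 1] += w_right * m
    grid_mv[i + 1] += w_right * m * v

# Grid velocities; in a full MPM/CD-MPM step, forces and damage evolution
# would be applied here before transferring back to the particles.
grid_v = np.divide(grid_mv, grid_m, out=np.zeros(N), where=grid_m > 0)
print(grid_v)
```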
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zornaifahid."}, {"start": 4.32, "end": 9.44, "text": " In a recent video, we showcased a computer graphics technique that simulated the process"}, {"start": 9.44, "end": 15.36, "text": " of baking, and now it's time to discuss a paper that is about simulating how we can"}, {"start": 15.36, "end": 17.96, "text": " tear this loaf of bread apart."}, {"start": 17.96, "end": 23.52, "text": " This paper aligns well with the favorite pastimes of a computer graphics researcher, which is,"}, {"start": 23.52, "end": 28.04, "text": " of course, destroying virtual objects in a spectacular fashion."}, {"start": 28.04, "end": 32.96, "text": " Like the previous work, this new paper also builds on top of the material point method,"}, {"start": 32.96, "end": 38.42, "text": " a hybrid simulation technique that uses both particles and grids to create these beautiful"}, {"start": 38.42, "end": 39.42, "text": " animations."}, {"start": 39.42, "end": 45.0, "text": " However, it traditionally does not support simulating cracking and tearing phenomena."}, {"start": 45.0, "end": 50.64, "text": " Now, have a look at this new work and marvel at how beautifully this phenomenon is simulated"}, {"start": 50.64, "end": 51.64, "text": " here."}, {"start": 51.64, "end": 59.0, "text": " With this, we can smash Oreos, candy crabs, pumpkins, and much, much more."}, {"start": 59.0, "end": 62.28, "text": " This jelly fracture scene is my absolute favorite."}, {"start": 62.28, "end": 68.84, "text": " Now, when an artist works with these simulations, the issue of artistic control often comes up."}, {"start": 68.84, "end": 74.04, "text": " After all, this method is meant to compute this phenomena by simulating physics and we"}, {"start": 74.04, "end": 77.44, "text": " can just instruct physics to be more beautiful."}, {"start": 77.44, "end": 78.92, "text": " Or can we?"}, {"start": 78.92, "end": 84.48, "text": " Well, this technique offers us plenty of parameters to tune the simulation to our liking, two that"}, {"start": 84.48, "end": 91.0, "text": " will note today are the alpha, which means the hardening and beta is the cohesion parameter."}, {"start": 91.0, "end": 93.28, "text": " So what does that mean exactly?"}, {"start": 93.28, "end": 98.12, "text": " Well, beta was cohesion, which is the force that holds matter together."}, {"start": 98.12, "end": 103.8, "text": " So as we go to the right, the objects stay more intact and as we go down, the objects"}, {"start": 103.8, "end": 106.4, "text": " shatter into more and more pieces."}, {"start": 106.4, "end": 111.04, "text": " The method offers us more parameters than these, but even with these two, we can really"}, {"start": 111.04, "end": 113.72, "text": " make the kind of simulation we are looking for."}, {"start": 113.72, "end": 115.4, "text": " Huh, what the heck?"}, {"start": 115.4, "end": 116.60000000000001, "text": " Let's do two more."}, {"start": 116.60000000000001, "end": 121.92, "text": " We can even control the way the cracks form with the MC parameter, which is the speed of crack"}, {"start": 121.92, "end": 127.68, "text": " propagation."}, {"start": 127.68, "end": 133.24, "text": " And G is the energy release, which, as we look to the right, increases the object's resistance"}, {"start": 133.24, "end": 134.88, "text": " to damage."}, {"start": 134.88, "end": 136.88, "text": " So how long does this take?"}, {"start": 136.88, "end": 139.6, "text": " Well, the technique 
takes its sweet time."}, {"start": 139.6, "end": 144.68, "text": " The execution timings range from 17 seconds to about 10 minutes per frame."}, {"start": 144.68, "end": 149.76, "text": " This is one of those methods that does something that wasn't possible before, and it is about"}, {"start": 149.76, "end": 151.51999999999998, "text": " doing things correctly."}, {"start": 151.51999999999998, "end": 156.88, "text": " And after a paper appears on something that makes the impossible possible, follow-up research"}, {"start": 156.88, "end": 161.6, "text": " works get published later that further refine and optimize it."}, {"start": 161.6, "end": 166.28, "text": " So as we say, two more papers down the line, and this will run much faster."}, {"start": 166.28, "end": 170.64, "text": " Now, a word about the first author of the paper, Joshua Wopper."}, {"start": 170.64, "end": 175.88, "text": " Strictly speaking, it is his third paper, but only the second within computer graphics,"}, {"start": 175.88, "end": 179.48, "text": " and my goodness, did he come back with guns blazing."}, {"start": 179.48, "end": 183.92, "text": " This paper was accepted to the C-Graph conference, which is one of the biggest honors a computer"}, {"start": 183.92, "end": 189.72, "text": " graphics researcher can get, perhaps equivalent to the Olympic gold medal for an athlete."}, {"start": 189.72, "end": 192.48, "text": " It definitely is worthy of a gold medal."}, {"start": 192.48, "end": 195.08, "text": " Make sure to have a look at the paper in the video description."}, {"start": 195.08, "end": 198.48, "text": " It is an absolutely beautifully crafted piece of work."}, {"start": 198.48, "end": 200.24, "text": " Congratulations, Joshua."}, {"start": 200.24, "end": 202.68, "text": " This episode has been supported by Lambda."}, {"start": 202.68, "end": 206.88, "text": " If you're a researcher, where I start up, looking for cheap GPU compute to run these"}, {"start": 206.88, "end": 210.0, "text": " algorithms, check out Lambda GPU Cloud."}, {"start": 210.0, "end": 214.56, "text": " I've talked about Lambda's GPU workstations in other videos, and I'm happy to tell you"}, {"start": 214.56, "end": 217.72, "text": " that they are offering GPU Cloud services as well."}, {"start": 217.72, "end": 224.68, "text": " The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19."}, {"start": 224.68, "end": 229.72, "text": " Lambda's web-based IDE lets you easily access your instance right in your browser."}, {"start": 229.72, "end": 235.12, "text": " And finally, hold onto your papers because the Lambda GPU Cloud costs less than half of"}, {"start": 235.12, "end": 237.32, "text": " AWS and Asia."}, {"start": 237.32, "end": 242.4, "text": " Make sure to go to lambdaleps.com slash papers and sign up for one of their amazing GPU"}, {"start": 242.4, "end": 243.8, "text": " instances today."}, {"start": 243.8, "end": 247.52, "text": " Our thanks to Lambda for helping us make better videos for you."}, {"start": 247.52, "end": 251.16, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=SWoravHhsUU
StyleGAN2: Near-Perfect Human Face Synthesis...and More
❤️ Check out Weights & Biases here and sign up for a free demo: https://www.wandb.com/papers Their blog post on street scene segmentation is available here: https://app.wandb.ai/borisd13/semantic-segmentation/reports/Semantic-Segmentation-on-Street-Scenes--VmlldzoxMDk2OA 📝 The paper "Analyzing and Improving the Image Quality of #StyleGAN" and its source code is available here: - http://arxiv.org/abs/1912.04958 - https://github.com/NVlabs/stylegan2 You can try it here: - https://colab.research.google.com/drive/1ShgW6wohEFQtqs_znMna3dzrcVoABKIH#scrollTo=4_s8h-ilzHQc 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #StyleGAN2
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Neural network-based learning algorithms are on the rise these days, and even though it is common knowledge that they are capable of image classification, or, in other words, looking at an image and saying whether it depicts a dog or a cat, nowadays, they can do much, much more. In this series, we covered a stunning paper that showcased a system that could not only classify an image, but write a proper sentence on what is going on, and could cover even highly non-trivial cases. You may be surprised, but this thing is not recent at all. This is four-year-old news. Insanity. Later, researchers turned this whole problem around and performed something that was previously thought to be impossible. They started using these networks to generate photorealistic images from a written text description. We could create new bird species by specifying that it should have orange legs and a short yellow bill. Later, researchers at NVIDIA recognized and addressed two shortcomings. One was that the images were not that detailed, and two, even though we could input text, we couldn't exert too much artistic control over the results. In came StyleGAN to the rescue, which was able to perform both of these difficult tasks really well. These images were progressively grown, which means that we started out with a coarse image and go over it over and over again, adding new details. This is what the results look like, and we can marvel at the fact that none of these people are real. However, some of these images were still contaminated by unwanted artifacts. Furthermore, there are some features that are highly localized as we exert control over these images. You can see how this part of the teeth and eyes are pinned to a particular position, and the algorithm just refuses to let it go, sometimes to the detriment of its surroundings. This new work is titled StyleGAN2, and it addresses all of these problems in one go. Perhaps this is the only place on the internet where we can say that finally, teeth and eyes are now allowed to float around freely, and mean it with a positive sentiment. Here you see a few hand-picked examples from the best ones, and I have to say, these are eye-poppingly detailed and correct-looking images. My goodness! The mixing examples you see here are also outstanding, way better than the previous version. Also, note that as there are plenty of training images out there for many other things beyond human faces, it can also generate cars, churches, horses, and of course, cats. Now that the original StyleGAN work has been out for a while, we have a little more clarity and understanding as to how it does what it does, and the redundant parts of the architecture have been revised and simplified. This clarity comes with additional advantages beyond faster and higher-quality training and image generation. For instance, interestingly, despite the fact that the quality has improved significantly, images made with the new method can be detected more easily. Note that the paper does much, much more than this, so make sure to have a look in the video description. In this series, we always say that two more papers down the line, and this technique will be leaps and bounds beyond the first iteration. Well, here we are, not two, but only one more paper down the line. What a time to be alive! The source code of this project is also available. What's more, it even runs in your browser.
This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford, and Berkeley. Here you see a beautiful final report on one of their projects on classifying parts of street images, and see how these learning algorithms evolve over time. Make sure to visit them through wandb.com slash papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.0, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zornaifahir,"}, {"start": 4.0, "end": 8.0, "text": " Muran Network-based Learning Algorithms, or on their eyes these days,"}, {"start": 8.0, "end": 13.0, "text": " and even though it is common knowledge that they are capable of image classification,"}, {"start": 13.0, "end": 19.0, "text": " or, in other words, looking at an image and saying whether it depicts a dog or a cat,"}, {"start": 19.0, "end": 22.0, "text": " nowadays, they can do much, much more."}, {"start": 22.0, "end": 26.0, "text": " In this series, we covered a stunning paper that showcased the system"}, {"start": 26.0, "end": 32.0, "text": " that could not only classify an image, but write a proper sentence on what is going on"}, {"start": 32.0, "end": 36.0, "text": " and could cover even highly non-trivial cases."}, {"start": 36.0, "end": 39.0, "text": " You may be surprised, but this thing is not recent at all."}, {"start": 39.0, "end": 43.0, "text": " This is four-year-old news. In sanity."}, {"start": 43.0, "end": 47.0, "text": " Later, researchers turned this whole problem around and performed something"}, {"start": 47.0, "end": 50.0, "text": " that was previously thought to be impossible."}, {"start": 50.0, "end": 54.0, "text": " They started using these networks to generate photorealistic images"}, {"start": 54.0, "end": 58.0, "text": " from a written text description. We could create new bird species"}, {"start": 58.0, "end": 63.0, "text": " by specifying that it should have orange legs and a short yellow bill."}, {"start": 63.0, "end": 68.0, "text": " Later, researchers at NVIDIA recognized and addressed two shortcomings."}, {"start": 68.0, "end": 72.0, "text": " One was that the images were not that detailed, and two,"}, {"start": 72.0, "end": 78.0, "text": " even though we could input text, we couldn't exert too much artistic control over the results."}, {"start": 78.0, "end": 85.0, "text": " In came style again to the rescue, which was able to perform both of these difficult tasks really well."}, {"start": 85.0, "end": 90.0, "text": " These images were progressively grown, which means that we started out with a course image"}, {"start": 90.0, "end": 94.0, "text": " and go over it, over and over again, adding new details."}, {"start": 94.0, "end": 100.0, "text": " This is what the results look like, and we can marvel at the fact that none of these people are real."}, {"start": 100.0, "end": 105.0, "text": " However, some of these images were still contaminated by unwanted artifacts."}, {"start": 105.0, "end": 111.0, "text": " Furthermore, there are some features that are highly localized, as we exert control over these images."}, {"start": 111.0, "end": 116.0, "text": " You can see how this part of the teeth and eyes are pinned to a particular position,"}, {"start": 116.0, "end": 122.0, "text": " and the algorithm just refuses to let it go, sometimes to the detriment of its surroundings."}, {"start": 122.0, "end": 129.0, "text": " This new work is titled Style again 2, and it addresses all of these problems in one go."}, {"start": 129.0, "end": 135.0, "text": " Perhaps this is the only place on the internet where we can say that finally teeth and eyes"}, {"start": 135.0, "end": 140.0, "text": " are now allowed to float around freely and mean it with a positive sentiment."}, {"start": 140.0, "end": 151.0, "text": " Here you see a few hand-picked examples from the best ones, and I have to say these are eye-poppingly 
detailed and correct looking images."}, {"start": 151.0, "end": 158.0, "text": " My goodness! The mixing examples you see here are also outstanding, way better than the previous version."}, {"start": 158.0, "end": 165.0, "text": " Also, note that as there are plenty of training images out there, for many other things beyond human faces,"}, {"start": 165.0, "end": 172.0, "text": " it can also generate cars, churches, horses, and of course, cats."}, {"start": 172.0, "end": 176.0, "text": " Now that the original Style again 1 work has been out for a while,"}, {"start": 176.0, "end": 181.0, "text": " we have a little more clarity and understanding as to how it does what it does,"}, {"start": 181.0, "end": 185.0, "text": " and the redundant parts of the architecture have been revised and simplified."}, {"start": 185.0, "end": 192.0, "text": " This clarity comes with additional advantages beyond faster and higher quality training and image generation."}, {"start": 192.0, "end": 198.0, "text": " For instance, interestingly, despite the fact that the quality has improved significantly,"}, {"start": 198.0, "end": 202.0, "text": " images made with the new method can be detected more easily."}, {"start": 202.0, "end": 207.0, "text": " Note that the paper does much, much more than this, so make sure to have a look in the video description."}, {"start": 207.0, "end": 214.0, "text": " In this series, we always say that two more papers down the line and this technique will be leaps and bounds beyond the first iteration."}, {"start": 214.0, "end": 219.0, "text": " Well, here we are, not two, but only one more paper down the line."}, {"start": 219.0, "end": 221.0, "text": " What a time to be alive!"}, {"start": 221.0, "end": 224.0, "text": " The source code of this project is also available."}, {"start": 224.0, "end": 227.0, "text": " What's more, it even runs in your browser."}, {"start": 227.0, "end": 230.0, "text": " This episode has been supported by weights and biases."}, {"start": 230.0, "end": 235.0, "text": " Wates and biases provides tools to track your experiments in your deep learning projects."}, {"start": 235.0, "end": 242.0, "text": " It can save you a ton of time and money in these projects and is being used by open AI to your research,"}, {"start": 242.0, "end": 244.0, "text": " Stanford and Berkeley."}, {"start": 244.0, "end": 250.0, "text": " Here you see a beautiful final report on one of their projects on classifying parts of street images"}, {"start": 250.0, "end": 254.0, "text": " and see how these learning algorithms evolve over time."}, {"start": 254.0, "end": 260.0, "text": " Make sure to visit them through wendb.com slash papers or just click the link in the video description"}, {"start": 260.0, "end": 262.0, "text": " and you can get a free demo today."}, {"start": 262.0, "end": 266.0, "text": " Our thanks to weights and biases for helping us make better videos for you."}, {"start": 266.0, "end": 273.0, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=T7w7QuYa4SQ
Finally, Differentiable Physics is Here!
❤️ Check out Weights & Biases here and sign up for a free demo: https://www.wandb.com/papers Their instrumentation for this paper is available here: https://app.wandb.ai/lavanyashukla/difftaichi 📝 The paper "DiffTaichi: Differentiable Programming for Physical Simulation" is available here: - https://arxiv.org/abs/1910.00935 - https://github.com/yuanming-hu/difftaichi My thesis on fluid control (with source code) is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/fluid_control_msc_thesis/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-407081/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. A few episodes ago, we discussed a new research work that performs something that they call differentiable rendering. The problem formulation is the following: we specify a target image that is either rendered by a computer program, or even better, a photo. The input is a pitiful approximation of it, and now, because it progressively changes the input materials, textures, and even the geometry of this input in a 3D modeling system, it is able to match this photo. At the end of the video, I noted that I am really looking forward to more differentiable rendering and differentiable everything papers. So fortunately, here we go. This new paper introduces differentiable programming for physical simulations. So what does that mean exactly? Let's look at a few examples and find out together. Imagine that we have this billiard game where we would like to hit the white ball with just the right amount of force and from the right direction such that the blue ball ends up close to the black spot. Let's try it. Well, this example shows that this doesn't happen by chance, and we have to engage in a fair amount of trial and error to make this happen. What this differentiable programming system does for us is that we can specify an end state, which is the blue ball on the black dot, and it is able to compute the required forces and angles to make this happen. Very close. But the key point here is that this system is general, and therefore can be applied to many, many more problems. We'll have a look at a few that are much more challenging than this example. For instance, it can also teach this gooey object to actuate itself in a way so that it starts to walk properly within only two minutes. The 3D version of this simulation learned so robustly that it can even withstand a few extra particles in the way. The next example is going to be obscenely powerful. Let me try to explain what this is to make sure that we can properly appreciate it. Many years ago, I was trying to solve a problem called fluid control, where we would try to coerce a smoke plume or a piece of fluid to take a given shape, like a bunny or a logo with letters. You can see some footage of this project here. The key difficulty of this problem is that this is not what typically happens in reality. Of course, a glass of spilled water is very unlikely to suddenly take the shape of a human face, so we have to introduce changes to the simulation itself, but at the same time, it still has to look as if it could happen in nature. If you wish to know more about my work here, the full thesis and the source code are available in the video description, and one of my kind students has even implemented it in Blender. So this problem is obscenely difficult. And you can now guess what's next for this differentiable technique: fluid control. It starts out with a piece of simulated ink with a checkerboard pattern, and it exerts just the appropriate forces so that it forms exactly the yin-yang symbol shortly after. I am shocked by how such a general system can perform something of this complexity. Having worked on this problem for a while, I can tell you that this is immensely difficult. Amazing. And hold on to your papers, because it can do even more. In this example, it adds carefully crafted ripples to the water to make sure that it ends up in a state that distorts the image of the squirrel in a way that a powerful and well-known neural network sees it not as a squirrel, but as a goldfish.
This thing is basically a victory lap in the paper. It is so powerful, it's not even funny. You can just make up some problems that sound completely impossible, and it rips right through them. The full source code of this work is also available. By the way, the first author of this paper is Yuanming Hu. His work was showcased several times in this series. We talked about his amazing jello simulation that was implemented in so few lines of code it almost fits on a business card. I said it in a previous episode, and I will say it again: I can't wait to see more and more papers in differentiable rendering and simulations. And as this work leaves plenty of room for creativity for novel problem definitions, I'd love to hear what you think about it. What else could this be used for? Could it be used in video games? Is it faster than other learning-based techniques? Anything else? Let me know in the comments below. What a time to be alive. This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford, and Berkeley. It is really easy to set up, so much so that they have made an instrumentation for this exact paper we have talked about in this episode. Have a look here. Make sure to visit them through wandb.com slash papers, that is, w-a-n-d-b.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
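The end-state optimization described in the transcript, where we specify where the ball should land and let gradients through the simulator find the required initial conditions, can be sketched in a few lines. The snippet below is a toy 1D stand-in rather than DiffTaichi: a ball with linear drag is rolled out with explicit Euler steps, a forward-mode derivative is carried along by hand, and gradient descent tunes the initial velocity so the final position hits a target. Every constant here is an illustrative assumption.

```python
# Toy "differentiable simulation": optimize an initial velocity so the roll-out
# ends at TARGET. Not DiffTaichi -- just the same idea in miniature.
DT, STEPS, DRAG, TARGET = 0.01, 200, 0.5, 1.0

def simulate(v0):
    """Explicit-Euler roll-out; also returns d(final position)/d(v0) (forward mode)."""
    x, v = 0.0, v0
    dx_dv0, dv_dv0 = 0.0, 1.0          # tangents of x and v with respect to v0
    for _ in range(STEPS):
        a = -DRAG * v                  # simple linear drag
        da_dv0 = -DRAG * dv_dv0
        x += DT * v;  dx_dv0 += DT * dv_dv0
        v += DT * a;  dv_dv0 += DT * da_dv0
    return x, dx_dv0

v0 = 0.0                               # initial guess for the "shot"
for _ in range(100):
    x_final, dx_dv0 = simulate(v0)
    grad = 2.0 * (x_final - TARGET) * dx_dv0   # gradient of (x_final - TARGET)^2
    v0 -= 0.5 * grad                   # gradient descent on the initial condition
print(f"optimized v0 = {v0:.3f}, final position = {simulate(v0)[0]:.3f}")
```

The same pattern, differentiating through the unrolled physics and descending on the inputs, is what lets the paper solve billiards shots, fluid control, and even the adversarial ripple example, just with far richer simulators and automatic rather than hand-written derivatives.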
[{"start": 0.0, "end": 5.5600000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zonai-Fahir, a few episodes ago"}, {"start": 5.5600000000000005, "end": 11.76, "text": " we discuss a new research work that performs something that they call differentiable rendering."}, {"start": 11.76, "end": 13.96, "text": " The problem formulation is the following."}, {"start": 13.96, "end": 20.64, "text": " To specify a target image that is either rendered by a computer program or even better a photo."}, {"start": 20.64, "end": 26.0, "text": " The input is a pitiful approximation of it and now because it progressively changed the"}, {"start": 26.0, "end": 32.2, "text": " input materials, textures and even the geometry of this input in a 3D modelar system it is"}, {"start": 32.2, "end": 34.480000000000004, "text": " able to match this photo."}, {"start": 34.480000000000004, "end": 38.76, "text": " At the end of the video I noted that I am really looking forward for more differentiable"}, {"start": 38.76, "end": 42.2, "text": " rendering and differentiable everything papers."}, {"start": 42.2, "end": 44.760000000000005, "text": " So fortunately here we go."}, {"start": 44.760000000000005, "end": 49.879999999999995, "text": " This new paper introduces differentiable programming for physical simulations."}, {"start": 49.879999999999995, "end": 52.16, "text": " So what does that mean exactly?"}, {"start": 52.16, "end": 55.2, "text": " Let's look at a few examples and find out together."}, {"start": 55.2, "end": 59.760000000000005, "text": " Imagine that we have this billiard game where we would like to hit the wide ball with just"}, {"start": 59.760000000000005, "end": 64.9, "text": " the right amount of force and from the right direction such that the blue ball ends up"}, {"start": 64.9, "end": 66.92, "text": " close to the black spot."}, {"start": 66.92, "end": 67.92, "text": " Let's try it."}, {"start": 67.92, "end": 72.72, "text": " Well, this example shows that this doesn't happen by chance and we have to engage in"}, {"start": 72.72, "end": 75.88, "text": " a fair amount of trial and error to make this happen."}, {"start": 75.88, "end": 81.24000000000001, "text": " What this differentiable programming system does for us is that we can specify an end state"}, {"start": 81.24, "end": 86.03999999999999, "text": " which is the blue ball on the black dot and it is able to compute the required forces"}, {"start": 86.03999999999999, "end": 88.83999999999999, "text": " and angles to make this happen."}, {"start": 88.83999999999999, "end": 90.0, "text": " Very close."}, {"start": 90.0, "end": 95.11999999999999, "text": " But the key point here is that this system is general and therefore can be applied to"}, {"start": 95.11999999999999, "end": 96.44, "text": " many many more problems."}, {"start": 96.44, "end": 100.91999999999999, "text": " We'll have a look at a few that are much more challenging than this example."}, {"start": 100.91999999999999, "end": 106.6, "text": " For instance, it can also teach this GUI object to actuate itself in a way so that it would"}, {"start": 106.6, "end": 110.19999999999999, "text": " start to work properly within only two minutes."}, {"start": 110.2, "end": 115.16, "text": " The 3D version of this simulation learned so robustly so that it can even withstand"}, {"start": 115.16, "end": 119.32000000000001, "text": " a few extra particles in the way."}, {"start": 119.32000000000001, "end": 122.36, "text": " The next example is going to be obscenely 
powerful."}, {"start": 122.36, "end": 127.24000000000001, "text": " I tried to explain what this is to make sure that we can properly appreciate it."}, {"start": 127.24000000000001, "end": 131.56, "text": " Many years ago I was trying to solve a problem called fluid control where we would try to"}, {"start": 131.56, "end": 137.84, "text": " coerce a smoke plume or a piece of fluid to take a given shape like a bunny or a logo"}, {"start": 137.84, "end": 138.96, "text": " with letters."}, {"start": 138.96, "end": 141.6, "text": " You can see some footage of this project here."}, {"start": 141.6, "end": 146.72, "text": " The key difficulty of this problem is that this is not what typically happens in reality."}, {"start": 146.72, "end": 152.28, "text": " Of course, a glass of spilled water is very unlikely to suddenly take the shape of a human"}, {"start": 152.28, "end": 158.4, "text": " face so we have to introduce changes to the simulation itself but at the same time it"}, {"start": 158.4, "end": 162.0, "text": " still has to look as if it could happen in nature."}, {"start": 162.0, "end": 166.8, "text": " If you wish to know more about my work here, the full thesis and the source code is available"}, {"start": 166.8, "end": 172.44, "text": " in the video description and one of my kind students has even implemented it in Blunder."}, {"start": 172.44, "end": 175.72, "text": " So this problem is obscenely difficult."}, {"start": 175.72, "end": 180.60000000000002, "text": " And you can now guess what's next for this differentiable technique, fluid control."}, {"start": 180.60000000000002, "end": 186.04000000000002, "text": " It starts out with a piece of simulated ink with a checkerboard pattern and it exerts"}, {"start": 186.04000000000002, "end": 191.92000000000002, "text": " just the appropriate forces so that it forms exactly the Yin Yang symbol shortly after."}, {"start": 191.92, "end": 197.67999999999998, "text": " I am shocked by how such a general system can perform something of this complexity."}, {"start": 197.67999999999998, "end": 202.72, "text": " Having worked on this problem for a while, I can tell you that this is immensely difficult."}, {"start": 202.72, "end": 203.72, "text": " Amazing."}, {"start": 203.72, "end": 207.0, "text": " And hold on to your papers because it can do even more."}, {"start": 207.0, "end": 212.0, "text": " In this example, it adds carefully crafted ripples to the water to make sure that it ends"}, {"start": 212.0, "end": 218.48, "text": " up in a state that distorts the image of the squirrel in a way that a powerful and well-known"}, {"start": 218.48, "end": 223.67999999999998, "text": " neural network sees it not as a squirrel, but as a goldfish."}, {"start": 223.67999999999998, "end": 227.35999999999999, "text": " This thing is basically a victory lap in the paper."}, {"start": 227.35999999999999, "end": 230.2, "text": " It is so powerful, it's not even funny."}, {"start": 230.2, "end": 235.23999999999998, "text": " You can just make up some problems that sound completely impossible and it rips right through"}, {"start": 235.23999999999998, "end": 236.23999999999998, "text": " them."}, {"start": 236.23999999999998, "end": 239.0, "text": " The full source code of this work is also available."}, {"start": 239.0, "end": 242.56, "text": " By the way, the first author of this paper is Yuan Ming-Hu."}, {"start": 242.56, "end": 245.72, "text": " His work was showcased several times in this series."}, {"start": 245.72, "end": 250.8, "text": " He talked about 
his amazing yellow simulation that was implemented in so few lines of"}, {"start": 250.8, "end": 253.92, "text": " code it almost fits on a business card."}, {"start": 253.92, "end": 257.04, "text": " I said it in a previous episode and I will say it again."}, {"start": 257.04, "end": 262.2, "text": " I can't wait to see more and more papers in differentiable rendering and simulations."}, {"start": 262.2, "end": 266.8, "text": " And as this work leaves plenty of room for creativity for novel problem definitions,"}, {"start": 266.8, "end": 269.16, "text": " I'd love to hear what you think about it."}, {"start": 269.16, "end": 271.2, "text": " What else could this be used for?"}, {"start": 271.2, "end": 276.08, "text": " In video games faster than other learning based techniques, anything else, let me know"}, {"start": 276.08, "end": 277.84, "text": " in the comments below."}, {"start": 277.84, "end": 279.56, "text": " What a time to be alive."}, {"start": 279.56, "end": 282.76, "text": " This episode has been supported by weights and biases."}, {"start": 282.76, "end": 287.44, "text": " Weight and biases provides tools to track your experiments in your deep learning projects."}, {"start": 287.44, "end": 292.36, "text": " It can save you a ton of time and money in these projects and is being used by OpenAI,"}, {"start": 292.36, "end": 295.48, "text": " Toyota Research, Stanford and Berkeley."}, {"start": 295.48, "end": 300.68, "text": " It is really easy to set up so much so that they have made an instrumentation for this exact"}, {"start": 300.68, "end": 303.40000000000003, "text": " paper we have talked about in this episode."}, {"start": 303.40000000000003, "end": 304.64, "text": " Have a look here."}, {"start": 304.64, "end": 311.6, "text": " Make sure to visit them through whendb.com slash papers, www.wndb.com slash papers or just"}, {"start": 311.6, "end": 315.8, "text": " click the link in the video description and you can get a free demo today."}, {"start": 315.8, "end": 319.76, "text": " Our thanks to weights and biases for helping us make better videos for you."}, {"start": 319.76, "end": 331.84, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=o_DhNqHazKY
This Neural Network Combines Motion Capture and Physics
❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 📝 The paper "DReCon: Data-Driven responsive Control of Physics-Based Characters" is available here: - https://montreal.ubisoft.com/en/drecon-data-driven-responsive-control-of-physics-based-characters/ - https://static-wordpress.akamaized.net/montreal.ubisoft.com/wp-content/uploads/2019/11/13214229/DReCon.pdf 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #gamedev
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this series, we often talk about computer animation and physical simulations, and these episodes are typically about one or the other. You see, it is possible to teach a simulated AI agent to lift weights and jump really high using physical simulations to make sure that the movements and forces are accurate. The simulation side is always looking for correctness. However, let's not forget that things also have to look good. Animation studios are paying a fortune to record motion capture data from real humans, and sometimes even dogs, to make sure that these movements are visually appealing. So is it possible to create something that reacts to our commands with the controller, looks good, and also adheres to physics? Well, have a look. This work was developed at Ubisoft La Forge. It responds to our input via the controller, and the output animations are fluid and natural. Since it relies on a technique called deep reinforcement learning, it requires training. You see that early on, the blue agent is trying to imitate the white character and it is not doing well at all. It basically looks like me when going to bed after reading papers all night. The white agent's movement is not physically simulated and was built using a motion database with only 10 minutes of animation data. This is the one that is in the "looks good" category. Or it would look really good if it wasn't pacing around like a drunkard, so the question naturally arises: who in their right mind would control a character like this? Well, of course, no one. This sequence was generated by an artificial worst-case player, which is a nightmare situation for an AI to reproduce. Early on, it indeed is a nightmare. However, after 30 hours of training, the blue agent learned to reproduce the motion of the white character while being physically simulated. So, what is the advantage of that? Well, for instance, it can interact with the scene better and is robust against perturbations. This means that it can rapidly recover from undesirable positions. This can be validated via something that the paper calls impact testing. Are you thinking what I am thinking? I hope so, because I am thinking about throwing blocks at this virtual agent, one of our favorite pastimes at Two Minute Papers, and it will be able to handle them. Whoops! Well, most of them anyway. It also reacts to a change in direction much quicker than previous agents. If all that was not amazing enough, the whole control system is very light and takes only a few microseconds, most of which is spent not even on the control part, but on the physics simulation. So, with the power of computer graphics and machine learning research, animation and physics can now be combined beautifully: it does not limit controller responsiveness, looks very realistic, and it is very likely that we'll see this technique in action in future Ubisoft games. Outstanding. This video was supported by you on Patreon. If you wish to watch these videos in early access or get your name immortalized in the video description, make sure to go to patreon.com slash TwoMinutePapers and pick up one of those cool perks, or we are also test-driving the early access program here on YouTube. Just go ahead and click the join button or use the link in the description. Thanks for watching and for your generous support, and I'll see you next time.
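The transcript explains that the physically simulated (blue) character is trained with deep reinforcement learning to reproduce the kinematic (white) character's motion. A generic sketch of that kind of imitation reward is shown below; the error terms, weights, and scales are assumptions in the spirit of motion-tracking controllers, not DReCon's exact formulation, which is in the paper.

```python
# Generic imitation-style tracking reward (illustrative assumptions only).
import numpy as np

def imitation_reward(sim_pose, ref_pose, sim_vel, ref_vel, sim_com, ref_com,
                     w_pose=0.5, w_vel=0.3, w_com=0.2):
    """High when the simulated character closely tracks the reference motion."""
    pose_err = np.sum((sim_pose - ref_pose) ** 2)   # joint pose error
    vel_err = np.sum((sim_vel - ref_vel) ** 2)      # joint velocity error
    com_err = np.sum((sim_com - ref_com) ** 2)      # center-of-mass error
    # Exponentials map each error into (0, 1]; small errors give rewards near 1.
    return (w_pose * np.exp(-2.0 * pose_err)
            + w_vel * np.exp(-0.1 * vel_err)
            + w_com * np.exp(-10.0 * com_err))

# Tiny usage example with made-up 3-joint data: a nearly perfect tracker.
r = imitation_reward(np.zeros(3), np.full(3, 0.05), np.zeros(3), np.zeros(3),
                     np.zeros(3), np.zeros(3))
print(f"reward = {r:.3f}")  # close to w_pose + w_vel + w_com = 1.0
```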
[{"start": 0.0, "end": 4.24, "text": " Dear Fellow Scholars, this is two minute papers with Karo Zsolnai-Fahir."}, {"start": 4.24, "end": 9.02, "text": " In this series, we often talk about computer animation and physical simulations, and these"}, {"start": 9.02, "end": 12.96, "text": " episodes are typically about one or the other."}, {"start": 12.96, "end": 18.72, "text": " You see, it is possible to teach a simulated AI agent to lift weights and jump really high"}, {"start": 18.72, "end": 24.04, "text": " using physical simulations to make sure that the movements and forces are accurate."}, {"start": 24.04, "end": 27.48, "text": " The simulation side is always looking for correctness."}, {"start": 27.48, "end": 31.880000000000003, "text": " However, let's not forget that things also have to look good."}, {"start": 31.880000000000003, "end": 36.96, "text": " Animation studios are paying a fortune to record motion capture data from real humans and"}, {"start": 36.96, "end": 42.04, "text": " sometimes even dogs to make sure that these movements are visually appealing."}, {"start": 42.04, "end": 47.32, "text": " So is it possible to create something that reacts to our commands with the controller, looks"}, {"start": 47.32, "end": 50.68, "text": " good, and also adheres to physics?"}, {"start": 50.68, "end": 52.6, "text": " Well, have a look."}, {"start": 52.6, "end": 55.24, "text": " This work was developed at Ubisoft LaForge."}, {"start": 55.24, "end": 61.480000000000004, "text": " It responds to our input via the controller and the output animations are fluid and natural."}, {"start": 61.480000000000004, "end": 66.72, "text": " Since it relies on a technique called deep reinforcement learning, it requires training."}, {"start": 66.72, "end": 71.72, "text": " You see that early on, the blue agent is trying to imitate the white character and it is"}, {"start": 71.72, "end": 73.36, "text": " not doing well at all."}, {"start": 73.36, "end": 77.96000000000001, "text": " It basically looks like me when going to bed after reading papers all night."}, {"start": 77.96000000000001, "end": 83.36, "text": " The white agent's movement is not physically simulated and was built using a motion database"}, {"start": 83.36, "end": 86.16, "text": " with only 10 minutes of animation data."}, {"start": 86.16, "end": 89.48, "text": " This is the one that is in the looks good category."}, {"start": 89.48, "end": 94.56, "text": " Or it would look really good if it wasn't pacing around like a drunkard, so the question"}, {"start": 94.56, "end": 99.8, "text": " naturally arises who in their right minds would control a character like this."}, {"start": 99.8, "end": 102.0, "text": " Well, of course, no one."}, {"start": 102.0, "end": 107.16, "text": " This sequence was generated by an artificial worst-case player which is a nightmare situation"}, {"start": 107.16, "end": 109.32, "text": " for NEA AI to reproduce."}, {"start": 109.32, "end": 112.08, "text": " Early on, it indeed is a nightmare."}, {"start": 112.08, "end": 117.88, "text": " However, after 30 hours of training, the blue agent learned to reproduce the motion of"}, {"start": 117.88, "end": 121.8, "text": " the white character while being physically simulated."}, {"start": 121.8, "end": 124.2, "text": " So, what is the advantage of that?"}, {"start": 124.2, "end": 130.4, "text": " Well, for instance, it can interact with the scene better and is robust against perturbations."}, {"start": 130.4, "end": 134.44, "text": " This means that it can rapidly 
recover from undesirable positions."}, {"start": 134.44, "end": 139.64, "text": " This can be validated via something that the paper calls impact testing."}, {"start": 139.64, "end": 141.68, "text": " Are you thinking what I am thinking?"}, {"start": 141.68, "end": 147.48000000000002, "text": " I hope so, because I am thinking about throwing blocks at this virtual agent, one of our favorite"}, {"start": 147.48000000000002, "end": 152.08, "text": " pastimes at two minute papers and it will be able to handle them."}, {"start": 152.08, "end": 153.08, "text": " Whoops!"}, {"start": 153.08, "end": 156.24, "text": " Well, most of them anyway."}, {"start": 156.24, "end": 161.20000000000002, "text": " It also reacts to a change in direction much quicker than previous agents."}, {"start": 161.20000000000002, "end": 166.36, "text": " If all that was not amazing enough, the whole control system is very light and takes"}, {"start": 166.36, "end": 172.16000000000003, "text": " only a few microseconds, most of which is spent by not even the control part, but the"}, {"start": 172.16000000000003, "end": 173.56, "text": " physics simulation."}, {"start": 173.56, "end": 178.52, "text": " So, with the power of computer graphics and machine learning research, animation and"}, {"start": 178.52, "end": 184.0, "text": " physics can now be combined beautifully, it does not limit controller responsiveness,"}, {"start": 184.0, "end": 188.4, "text": " looks very realistic and it is very likely that we'll see this technique in action in"}, {"start": 188.4, "end": 190.44000000000003, "text": " future Ubisoft games."}, {"start": 190.44000000000003, "end": 191.44000000000003, "text": " Outstanding"}, {"start": 191.44000000000003, "end": 194.60000000000002, "text": " This video was supported by you on Patreon."}, {"start": 194.6, "end": 199.35999999999999, "text": " If you wish to watch these videos in Early Access or get your name immortalized in the"}, {"start": 199.35999999999999, "end": 205.28, "text": " video description, make sure to go to patreon.com slash two minute papers and pick up one of"}, {"start": 205.28, "end": 211.12, "text": " those cool perks or we are also test driving the Early Access program here on YouTube."}, {"start": 211.12, "end": 215.35999999999999, "text": " Just go ahead and click the join button or use the link in the description."}, {"start": 215.36, "end": 227.44000000000003, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=O-52enqUSNw
Is a Realistic Water Bubble Simulation Possible?
❤️ Check out Weights & Biases here and sign up for a free demo: https://www.wandb.com/papers Their blog post and report on 3D segmentation is available here: https://app.wandb.ai/nbaryd/SparseConvNet-examples_3d_segmentation/reports?view=nbaryd%2FSemantic%20Segmentation%20of%203D%20Point%20Clouds 📝 The paper "Unified Spray, Foam and Bubbles for Particle-Based Fluids" is available here: - https://pdfs.semanticscholar.org/72ec/134f3c87c543be5f95330f73f4eb383c5511.pdf - https://cg.informatik.uni-freiburg.de/publications/2012_CGI_sprayFoamBubbles.pdf The FLIP Fluids plugin is available here: https://blendermarket.com/products/flipfluids If you wish to understand and implement a simple, real-time fluid simulator, you can check out my thesis here. It runs on your GPU and comes with source code: https://users.cg.tuwien.ac.at/zsolnai/gfx/fluid_control_msc_thesis/ If you are yearning for more, Doyub Kim's Fluid Engine Development is available here: https://doyub.com/fluid-engine-development/ Ryan Guy's notes on their FLIP Fluids implementation differences from the paper: - The paper mentions gathering data from nearby particles in some calculations. Our simulator does not perform a nearest-neighbour particle search so we've made some modifications to to avoid this search. - Our whitewater generator uses the Signed Distance Field (SDF) of the liquid surface to derive the surface curvature for locating wavecrests. - The SDF is also used to test the depth of the particles, such as if they are located inside or outside of the liquid volume. - Instead of using the particles to calculate velocity difference (trapped air potential), we are using velocity values that are stored on the grid.  🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: Ryan Guy and Dennis Fassbaender Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. If we study the laws of fluid motion from physics and write a computer program that contains these laws, we can create beautiful simulations like the one you see here. The amount of detail we can simulate with these programs is increasing every year, not only due to the fact that hardware improves over time, but also because the pace of progress in computer graphics research is truly remarkable. However, when talking about fluid simulations, we often see a paper produce a piece of geometry that evolves over time, and of course, the more detailed this geometry is, the better. However, look at this. It is detailed, but something is really missing here. Do you see it? Well, let's look at the revised version of this simulation to find out what it is. Yes, foam, spray, and bubble particles are now present, and the quality of the simulation just got elevated to the next level. Also, if you look at the paper, you see that it is from 2012, and it describes how to add these effects to a fluid simulation. So, why are we talking about a paper that's about 8 years old? Not only that, but this work was not published at one of the most prestigious journals. Not even close. So, why? Well, you'll find out in a moment, but I have to tell you that I just got to know about this paper a few days ago, and it is so good it has single-handedly changed the way I think about research. Note that a variant of this paper has been implemented in a Blender plugin called FLIP Fluids. Blender is a free and open-source modeling program, which is a complete powerhouse. I love it. And this plugin embeds this work into a modern framework, and boy, does it come to life in there. I have rerun one of their simulations and rendered a high-resolution animation with light transport. The fluid simulation took about 8 hours, and as always, I went a little overboard with the light transport, which took about 40 hours. Have a look. It is unreal how good it looks. My goodness. It is one of the miracles of the world that we can put a piece of silicon in our machines and, through the power of science, explain fluid dynamics to it so well that such a simulation can come out of it. I have been working on these for many years now, and I am still shocked by the level of progress in computer graphics research. Let's talk about three important aspects of this work. First, it proposes one unified technique to add foam, spray, and bubbles in one go to the fluid simulation. One technique to model all three. In the paper, they are collectively called diffuse particles, and if these particles are deeply underwater, they will be classified as bubbles. If they are on the surface of the water, they will be foam particles, and if they are further above the surface, we will call them spray particles. With one method, we get all three of those. Lovely. Two, when I showed you this footage with and without the diffuse particles, normally I would need to resimulate the whole fluid domain to add these advanced effects, but this is not the case at all. These particles can be added as a post-processing step, which means that I was able to just run the simulation once and then decide whether to use them or not. Just one click, and here it is, with the particles removed. Absolutely amazing. And three, perhaps the most important part: this technique is so simple I could hardly believe the paper when I saw it.
You see, normally, to be able to simulate the formation of bubbles or foam, we would need to compute the Weber numbers, which requires expensive surface tension computations, and more. Instead, the paper forfeits that and goes with the notion that bubbles and foam appear at regions where air gets trapped within the fluid. On the back of this knowledge, they note that wave crests are an example of that, and propose a method to find these wave crests by looking for regions where the curvature of the fluid geometry is high and locally convex. Both of these can be found through very simple expressions. Finally, air is also trapped when fluid particles move rapidly towards each other, which is also super simple to compute and evaluate. The whole thing can be implemented in a day, and it leads to absolutely killer fluid animations. You see, I have a great deal of admiration for a 20-page-long technique that models something very difficult perfectly, but I have at least as much admiration for an almost trivially simple method that gets us to 80% of the perfect solution. This paper is the latter. I love it. This really changed my thinking not only about fluid simulation papers; this paper is so good, it challenged how I think about research in general. It is an honor to be able to talk about beautiful works like this to you, so thank you so much for coming and listening to these videos. Note that the paper does more than what we've talked about here: it also proposes a method to compute the lifetime of these particles, tells us how they get advected by the water, and more. Make sure to check out the paper in the description for more on that. If you're interested, go and try Blender. That tool is completely free for everyone to use. I have been using it for around a decade now, and it is truly incredible that something like this exists as a community effort. The FLIP Fluids plugin is a paid addition. If one pays for it, it can be used immediately, or if you spend a little time, you can compile it yourself, and this way you can get it for free. Respect to the plugin authors for making such a gentle business model. If you don't want to do any of those, even Blender has a usable built-in fluid simulator. You can do incredible things with it, but it cannot produce diffuse particles. I am still stunned by how simple and powerful this technique is. The lesson here is that you can really find gems anywhere, not just around the most prestigious research venues. I hope you got inspired by this, and if you wish to understand how these fluids work some more, or write your own simulator, I put a link to my master's thesis, where I try to explain the whole thing as intuitively as possible, and it also comes with full source code, free of charge, for a simulator that runs on your graphics card. If you feel so voracious that even that's not enough, I will also highly recommend Doyub Kim's book on fluid engine development. That one also comes with free source code. This episode has been supported by Weights & Biases. Here you see their beautiful final report on a point cloud classification project of theirs, and see how using different learning rates and other parameters influences the final results. Weights & Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects, and is being used by OpenAI, Toyota Research, Stanford, and Berkeley.
Make sure to visit them through wandb.com slash papers, that is w-a-n-d-b dot com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
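The diffuse particle idea described above boils down to a few very simple rules, so here is a minimal Python sketch of it: classify a secondary particle as a bubble, foam, or spray particle from its position relative to the liquid surface, and score trapped air from how quickly two fluid particles approach each other. The function names, the signed-distance helper, and the band thickness are my own assumptions for illustration, not code or constants from the paper or the Flip Fluids plugin.

import numpy as np

def classify_diffuse_particle(position, signed_distance_to_surface, foam_band=0.05):
    """Return 'bubble', 'foam', or 'spray' for one diffuse particle.
    signed_distance_to_surface(position) is assumed to be negative inside the
    liquid and positive above it; foam_band is an assumed half-thickness of the
    surface layer treated as foam."""
    d = signed_distance_to_surface(position)
    if d < -foam_band:
        return "bubble"   # deeply underwater
    if d > foam_band:
        return "spray"    # clearly above the surface
    return "foam"         # near the free surface

def closing_speed(v_i, v_j, x_i, x_j):
    """Toy version of the 'particles moving rapidly towards each other' criterion:
    the speed at which particle i approaches particle j, clamped at zero."""
    direction = np.asarray(x_j, dtype=float) - np.asarray(x_i, dtype=float)
    direction /= np.linalg.norm(direction) + 1e-9
    return max(0.0, float(np.dot(np.asarray(v_i) - np.asarray(v_j), direction)))

# Example with a flat water surface at height 0, so the signed distance is just the height:
print(classify_diffuse_particle(np.array([0.0, -0.3, 0.0]), lambda p: p[1]))  # -> 'bubble'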
[{"start": 0.0, "end": 4.5600000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Ejona Ifehir."}, {"start": 4.5600000000000005, "end": 9.8, "text": " If we study the laws of fluid motion from physics and write a computer program that contains"}, {"start": 9.8, "end": 14.280000000000001, "text": " these laws, we can create beautiful simulations like the one you see here."}, {"start": 14.280000000000001, "end": 19.080000000000002, "text": " The amount of detail we can simulate with these programs is increasing every year, not only"}, {"start": 19.080000000000002, "end": 24.36, "text": " due to the fact that hardware improves over time, but also the pace of progress in computer"}, {"start": 24.36, "end": 27.080000000000002, "text": " graphics research is truly remarkable."}, {"start": 27.08, "end": 32.519999999999996, "text": " However, when talking about fluid simulations, we often see a paper produce a piece of geometry"}, {"start": 32.519999999999996, "end": 37.879999999999995, "text": " that evolves over time, and of course the more detailed this geometry is, the better."}, {"start": 37.879999999999995, "end": 40.239999999999995, "text": " However, look at this."}, {"start": 40.239999999999995, "end": 43.76, "text": " It is detailed, but something is really missing here."}, {"start": 43.76, "end": 44.959999999999994, "text": " Do you see it?"}, {"start": 44.959999999999994, "end": 49.8, "text": " Well, let's look at the revised version of this simulation to find out what it is."}, {"start": 49.8, "end": 55.519999999999996, "text": " Yes, form, spray, and bubble particles are now present, and the quality of the simulation"}, {"start": 55.52, "end": 58.120000000000005, "text": " just got elevated to the next level."}, {"start": 58.120000000000005, "end": 64.08, "text": " Also, if you look at the source text, you see that this is a paper from 2012, and it describes"}, {"start": 64.08, "end": 67.16, "text": " how to add these effects to a fluid simulation."}, {"start": 67.16, "end": 72.0, "text": " So, why are we talking about a paper that's about 8 years old?"}, {"start": 72.0, "end": 77.04, "text": " Not only that, but this work was not published at one of the most prestigious journals."}, {"start": 77.04, "end": 78.04, "text": " Not even close."}, {"start": 78.04, "end": 79.04, "text": " So, why?"}, {"start": 79.04, "end": 84.12, "text": " Well, you'll find out in a moment, but I have to tell you that I just got to know about"}, {"start": 84.12, "end": 90.48, "text": " this paper a few days ago, and it is so good it has single-handedly changed the way I think"}, {"start": 90.48, "end": 91.56, "text": " about research."}, {"start": 91.56, "end": 97.88000000000001, "text": " Note that a variant of this paper has been implemented in a blender plugin called Flip Fluids."}, {"start": 97.88000000000001, "end": 102.52000000000001, "text": " Blender is a free and open source modeler program, which is a complete powerhouse."}, {"start": 102.52000000000001, "end": 103.60000000000001, "text": " I love it."}, {"start": 103.60000000000001, "end": 109.04, "text": " And this plugin embeds this work into a modern framework, and boy, does it come to life"}, {"start": 109.04, "end": 110.04, "text": " in there."}, {"start": 110.04, "end": 114.52000000000001, "text": " I have rerun one of their simulations and rendered a high resolution animation with"}, {"start": 114.52000000000001, "end": 115.76, "text": " light transport."}, {"start": 115.76, "end": 120.76, "text": " The fluid 
simulation took about 8 hours, and as always, I went a little overboard with"}, {"start": 120.76, "end": 124.08000000000001, "text": " the light transport that took about 40 hours."}, {"start": 124.08000000000001, "end": 125.16000000000001, "text": " Have a look."}, {"start": 125.16000000000001, "end": 127.88000000000001, "text": " It is unreal how good it looks."}, {"start": 127.88000000000001, "end": 129.16, "text": " My goodness."}, {"start": 129.16, "end": 133.96, "text": " It is one of the miracles of the world that we can put a piece of silicon in our machines"}, {"start": 133.96, "end": 139.44, "text": " and through the power of science, explain fluid dynamics to it so well that such a simulation"}, {"start": 139.44, "end": 140.44, "text": " can come out of it."}, {"start": 140.44, "end": 144.72, "text": " I have been working on these for many years now, and I am still shocked by the level of"}, {"start": 144.72, "end": 147.4, "text": " progress in computer graphics research."}, {"start": 147.4, "end": 150.48, "text": " Let's talk about three important aspects of this work."}, {"start": 150.48, "end": 156.68, "text": " First, it proposes one unified technique to add foam, spray, and bubbles in one go to"}, {"start": 156.68, "end": 158.32, "text": " the fluid simulation."}, {"start": 158.32, "end": 160.76, "text": " One technique to model all three."}, {"start": 160.76, "end": 165.4, "text": " In the paper, they are collectively called diffuse particles, and if these particles are"}, {"start": 165.4, "end": 169.16, "text": " deeply underwater, they will be classified as bubbles."}, {"start": 169.16, "end": 173.52, "text": " If they are on the surface of the water, they will be foam particles, and if they are"}, {"start": 173.52, "end": 177.64, "text": " further above the surface, we will call them spray particles."}, {"start": 177.64, "end": 180.48, "text": " With one method, we get all three of those."}, {"start": 180.48, "end": 181.48, "text": " Lovely."}, {"start": 181.48, "end": 186.84, "text": " Two, when I had shown you this footage, with and without the diffuse particles, normally"}, {"start": 186.84, "end": 191.51999999999998, "text": " I would need to resimulate the whole fluid domain to add these advanced effects, but"}, {"start": 191.51999999999998, "end": 193.68, "text": " this is not the case at all."}, {"start": 193.68, "end": 198.48, "text": " These particles can be added as a post-processing step, which means that I was able to just"}, {"start": 198.48, "end": 203.88, "text": " run the simulation once, and then decide whether to use them or not."}, {"start": 203.88, "end": 208.2, "text": " Just one click, and here it is, with the particles removed."}, {"start": 208.2, "end": 209.6, "text": " Absolutely amazing."}, {"start": 209.6, "end": 214.95999999999998, "text": " And three, perhaps the most important part, this technique is so simple I could hardly"}, {"start": 214.95999999999998, "end": 217.2, "text": " believe the paper when I saw it."}, {"start": 217.2, "end": 222.28, "text": " You see, normally, to be able to simulate the formation of bubbles or foam, we would need"}, {"start": 222.28, "end": 227.67999999999998, "text": " to compute the waybar numbers, which requires expensive surface tangent computations, and"}, {"start": 227.68, "end": 228.68, "text": " more."}, {"start": 228.68, "end": 234.20000000000002, "text": " Instead, the paper for fits that, and goes with the notion that bubbles and foam appear"}, {"start": 234.20000000000002, "end": 237.68, "text": 
" at regions where air gets trapped within the fluid."}, {"start": 237.68, "end": 242.20000000000002, "text": " On the back of this knowledge, they note that wave crests are an example of that, and"}, {"start": 242.20000000000002, "end": 247.44, "text": " propose a method to find these wave crests by looking for regions where the curvature"}, {"start": 247.44, "end": 251.4, "text": " of the fluid geometry is high and locally convex."}, {"start": 251.4, "end": 254.52, "text": " Both of these can be found through very simple expressions."}, {"start": 254.52, "end": 260.76, "text": " Finally, air is also trapped when fluid particles move rapidly towards each other, which is also"}, {"start": 260.76, "end": 263.56, "text": " super simple to compute and evaluate."}, {"start": 263.56, "end": 269.56, "text": " The whole thing can be implemented in a day, and it leads to absolutely killer fluid animations."}, {"start": 269.56, "end": 275.08, "text": " You see, I have a great deal of admiration for a 20-page long technique that models something"}, {"start": 275.08, "end": 281.40000000000003, "text": " very difficult perfectly, but I have at least as much admiration for an almost trivially"}, {"start": 281.4, "end": 286.08, "text": " simple method that gets us to 80% of the perfect solution."}, {"start": 286.08, "end": 287.91999999999996, "text": " This paper is the latter."}, {"start": 287.91999999999996, "end": 289.47999999999996, "text": " I love it."}, {"start": 289.47999999999996, "end": 294.23999999999995, "text": " This really changed my thinking not only about fluid simulation papers, but this paper"}, {"start": 294.23999999999995, "end": 298.67999999999995, "text": " is so good, it challenged how I think about research in general."}, {"start": 298.67999999999995, "end": 303.35999999999996, "text": " It is an honor to be able to talk about beautiful works like this to you, so thank you so much"}, {"start": 303.35999999999996, "end": 306.08, "text": " for coming and listening to these videos."}, {"start": 306.08, "end": 310.88, "text": " Note that the paper does more than what we've talked about here, it also proposes a method"}, {"start": 310.88, "end": 316.36, "text": " to compute the lifetime of these particles, tells us how they get evacuated by water and"}, {"start": 316.36, "end": 317.36, "text": " more."}, {"start": 317.36, "end": 320.68, "text": " Make sure to check out the paper in the description for more on that."}, {"start": 320.68, "end": 323.32, "text": " If you're interested, go and try a blender."}, {"start": 323.32, "end": 326.15999999999997, "text": " That tool is completely free for everyone to use."}, {"start": 326.15999999999997, "end": 330.48, "text": " I have been using it for around a decade now, and it is truly incredible that something"}, {"start": 330.48, "end": 333.24, "text": " like this exists as a community effort."}, {"start": 333.24, "end": 335.8, "text": " The Flip Fluids plugin is a paid edition."}, {"start": 335.8, "end": 341.68, "text": " If one pays for it, it can be used immediately, or if you spend a little time, you can compile"}, {"start": 341.68, "end": 345.24, "text": " it yourself, and this way you can get it for free."}, {"start": 345.24, "end": 348.72, "text": " Respect for the plugin authors for making such a gentle business model."}, {"start": 348.72, "end": 353.48, "text": " If you don't want to do any of those, even blender has a usable build-in fluid simulator."}, {"start": 353.48, "end": 358.2, "text": " You can do incredible things with 
it, but it can produce diffuse particles."}, {"start": 358.2, "end": 362.28000000000003, "text": " I am still stunned by how simple and powerful this technique is."}, {"start": 362.28, "end": 366.79999999999995, "text": " The lesson here is that you can really find jumps anywhere, not just around the most"}, {"start": 366.79999999999995, "end": 368.67999999999995, "text": " prestigious research venues."}, {"start": 368.67999999999995, "end": 372.96, "text": " I hope you got inspired by this, and if you wish to understand how these fluids work"}, {"start": 372.96, "end": 377.84, "text": " some more, or write your own simulator, I put a link to my master's thesis where I"}, {"start": 377.84, "end": 382.84, "text": " try to explain the whole thing as intuitively as possible, and it also comes with a full"}, {"start": 382.84, "end": 387.84, "text": " source code, free of charge, for a simulator that runs on your graphics card."}, {"start": 387.84, "end": 393.2, "text": " If you feel so voracious that even that's not enough, I will also highly recommend Dojo"}, {"start": 393.2, "end": 395.79999999999995, "text": " Kim's book on fluid engine development."}, {"start": 395.79999999999995, "end": 398.44, "text": " That one also comes with free source code."}, {"start": 398.44, "end": 401.64, "text": " This episode has been supported by weights and biases."}, {"start": 401.64, "end": 406.52, "text": " Here you see their beautiful final report on a point cloud classification project of theirs,"}, {"start": 406.52, "end": 412.64, "text": " and see how using different learning rates and other parameters influences the final results."}, {"start": 412.64, "end": 417.2, "text": " Wates and biases provides tools to track your experiments in your deep learning projects."}, {"start": 417.2, "end": 421.84, "text": " You can save you a ton of time and money in these projects, and is being used by open"}, {"start": 421.84, "end": 425.59999999999997, "text": " AI to your research, Stanford, and Berkeley."}, {"start": 425.59999999999997, "end": 432.2, "text": " Make sure to visit them through wendeebe.com slash papers, w-a-n-d-b.com slash papers, or"}, {"start": 432.2, "end": 436.44, "text": " just click the link in the video description, and you can get a free demo today."}, {"start": 436.44, "end": 440.32, "text": " Our thanks to weights and biases for helping us make better videos for you."}, {"start": 440.32, "end": 447.32, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=eTUmmW4ispA
This Neural Network Performs Foveated Rendering
❤️ Check out Linode here and get $20 free credit on your account: https://www.linode.com/papers 📝 The paper "DeepFovea: Neural Reconstruction for Foveated Rendering and Video Compression using Learned Statistics of Natural Videos" is available here: https://research.fb.com/publications/deepfovea-neural-reconstruction-for-foveated-rendering-and-video-compression-using-learned-statistics-of-natural-videos/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-1893783/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #vr
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. As humans, when looking at the world, our eyes and brain do not process the entirety of the image we have in front of us, but play an interesting trick on us. We can only see fine details in a tiny, tiny foveated region that we are gazing at, while our peripheral or indirect vision only sees a sparse, blurry version of the image, and the rest of the information is filled in by our brain. This is very efficient because our vision system only has to process a tiny fraction of the visual data that is in front of us, and it still enables us to interact with the world around us. So, what if we would take a learning algorithm that does something similar for digital videos? Imagine that we would need to render a sparse video with only every tenth pixel filled with information and some kind of neural network-based technique would be able to reconstruct the full image, similarly to what our brain does. Yes, that sounds great, but that is very little information to reconstruct an image from. So is it possible? Well, hold on to your papers, because this new work can reconstruct a near-perfect image by looking at less than 10% of the input pixels. So we have this as an input, and we get this. Wow! What is happening here is called a neural reconstruction of foveated rendering data, or you are welcome to refer to it as foveated reconstruction in short during your conversations over dinner. The scrambled text part here is quite interesting. One might think that, well, it could be better. However, if I look at the appropriate place in the sparse image, I not only cannot read the text, I am not even sure if I see anything that indicates that there is a text there at all. So far, the example assumed that we are looking at a particular point in the middle of the screen, and the ultimate question is, how does this deal with a real-life case where the user is looking around? Well, let's see. This is the input, and the reconstruction. Witchcraft. Let's have a look at some more results. Note that this method is developed for head-mounted displays where we have information on where the user is looking over time, and this can make all the difference in terms of optimization. You see a comparison here against a method labeled as multi-resolution. This is from a paper by the name Foveated 3D Graphics, and you can see that the difference in the quality of the reconstruction is truly remarkable. Additionally, it has been trained on 350,000 short natural video sequences and the whole thing runs in real time. Also, note that we often discuss image inpainting methods in this series. For instance, what you see here is the legendary PatchMatch algorithm, which is one of these, and it is able to fill in missing parts of an image. However, in image inpainting, most of the image is intact with smaller regions that are missing. This is even more difficult than image inpainting because the vast majority of the image is completely missing. The fact that we can now do this with learning-based methods is absolutely incredible. The first author of the paper is Anton Kaplanyan, who is a brilliant and very rigorous mathematician, so of course, the results are evaluated in detail both in terms of mathematics and with a user study. Make sure to have a look at the paper for more on that. 
We got to know each other with Anton during the days when all we did was light transport simulations all day, every day, and we were always speculating about potential projects, and to my great sadness, somehow, unfortunately, we never managed to work together for a full project. Again, congratulations, Anton. Beautiful work. What a time to be alive. This episode has been supported by Linode. Linode is the world's largest independent cloud computing provider. They offer affordable GPU instances featuring the Quadro RTX 6000, which is tailor-made for AI, scientific computing, and computer graphics projects. Exactly the kind of works you see here in this series. If you feel inspired by these works and you wish to run your experiments or deploy your already existing works through a simple and reliable hosting service, make sure to join over 800,000 other happy customers and choose Linode. To spin up your own GPU instance and receive a $20 free credit, visit linode.com slash papers or click the link in the video description and use the promo code papers20 during sign-up. Give it a try today. Our thanks to Linode for supporting the series and helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
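To make the gaze-contingent sparsity above a bit more concrete, here is a minimal sketch of how one could build the kind of sparse input such a method reconstructs from: keep pixels with high probability near the gaze point and with low probability in the periphery, so that only a small fraction of the frame is ever rendered. The falloff shape and all parameter values are assumptions for illustration, not the sampling scheme of the DeepFovea paper.

import numpy as np

def sparse_sampling_mask(height, width, gaze_x, gaze_y,
                         p_center=1.0, p_far=0.02, sigma=150.0, seed=0):
    """Boolean mask of pixels to actually render for one frame.
    The probability of keeping a pixel decays with distance from the gaze point,
    from p_center at the fovea to p_far in the periphery (assumed values)."""
    rng = np.random.default_rng(seed)
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.sqrt((xs - gaze_x) ** 2 + (ys - gaze_y) ** 2)
    keep_prob = p_far + (p_center - p_far) * np.exp(-(dist / sigma) ** 2)
    return rng.random((height, width)) < keep_prob

mask = sparse_sampling_mask(1080, 1920, gaze_x=960, gaze_y=540)
print("fraction of pixels rendered:", mask.mean())  # typically well under 10%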
[{"start": 0.0, "end": 4.74, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Karo Zonai Fahir."}, {"start": 4.74, "end": 9.98, "text": " As humans, when looking at the world, our eyes and brain does not process the entirety"}, {"start": 9.98, "end": 14.540000000000001, "text": " of the image we have in front of us, but plays an interesting trick on us."}, {"start": 14.540000000000001, "end": 20.02, "text": " We can only see fine details in a tiny, tiny, for-vated region that we are gazing at,"}, {"start": 20.02, "end": 26.18, "text": " while our peripheral or indirect vision only sees a sparse, blurry version of the image,"}, {"start": 26.18, "end": 29.900000000000002, "text": " and the rest of the information is filled in by our brain."}, {"start": 29.9, "end": 35.18, "text": " This is very efficient because our vision system only has to process a tiny fraction of"}, {"start": 35.18, "end": 40.66, "text": " the visual data that is in front of us, and it still enables us to interact with the"}, {"start": 40.66, "end": 42.22, "text": " world around us."}, {"start": 42.22, "end": 48.980000000000004, "text": " So, what if we would take a learning algorithm that does something similar for digital videos?"}, {"start": 48.980000000000004, "end": 54.68, "text": " Imagine that we would need to render a sparse video with only every tenth pixel filled"}, {"start": 54.68, "end": 59.86, "text": " with information and some kind of neural network-based technique would be able to reconstruct"}, {"start": 59.86, "end": 63.22, "text": " the full image similarly to what our brain does."}, {"start": 63.22, "end": 68.62, "text": " Yes, that sounds great, but that is very little information to reconstruct an image from."}, {"start": 68.62, "end": 70.38, "text": " So is it possible?"}, {"start": 70.38, "end": 75.9, "text": " Well, hold on to your papers because this new work can reconstruct a near-perfect image"}, {"start": 75.9, "end": 79.58, "text": " by looking at less than 10% of the input pixels."}, {"start": 79.58, "end": 83.42, "text": " So we have this as an input, and we get this."}, {"start": 83.42, "end": 84.94, "text": " Wow!"}, {"start": 84.94, "end": 90.98, "text": " What is happening here is called a neural reconstruction of foviated rendering data, or you are welcome"}, {"start": 90.98, "end": 96.58, "text": " to refer to it as foviated reconstruction in short during your conversations over dinner."}, {"start": 96.58, "end": 99.78, "text": " The scrambled text part here is quite interesting."}, {"start": 99.78, "end": 102.9, "text": " One might think that, well, it could be better."}, {"start": 102.9, "end": 107.46, "text": " However, given the fact that if you look at the appropriate place in the sparse image,"}, {"start": 107.46, "end": 112.62, "text": " I not only cannot read the text, I am not even sure if I see anything that indicates"}, {"start": 112.62, "end": 114.78, "text": " that there is a text there at all."}, {"start": 114.78, "end": 119.5, "text": " So far, the example assumed that we are looking at a particular point in the middle of the"}, {"start": 119.5, "end": 124.74000000000001, "text": " screen, and the ultimate question is, how does this deal with a real-life case where the"}, {"start": 124.74000000000001, "end": 126.7, "text": " user is looking around?"}, {"start": 126.7, "end": 128.46, "text": " Well, let's see."}, {"start": 128.46, "end": 132.74, "text": " This is the input, and the reconstruction."}, {"start": 132.74, "end": 133.74, "text": " Witchcraft."}, 
{"start": 133.74, "end": 135.86, "text": " Let's have a look at some more results."}, {"start": 135.86, "end": 140.94, "text": " Note that this method is developed for head-mounted displays where we have information on where"}, {"start": 140.94, "end": 147.78, "text": " the user is looking over time, and this can make all the difference in terms of optimization."}, {"start": 147.78, "end": 152.26, "text": " You see a comparison here against a method labeled as multi-resolution."}, {"start": 152.26, "end": 157.57999999999998, "text": " This is from a paper by the name foviated 3D graphics, and you can see that the difference"}, {"start": 157.57999999999998, "end": 161.38, "text": " in the quality of the reconstruction is truly remarkable."}, {"start": 161.38, "end": 167.98, "text": " Additionally, it has been trained on 350,000 short natural video sequences and the whole thing"}, {"start": 167.98, "end": 169.98, "text": " runs in real time."}, {"start": 169.98, "end": 174.73999999999998, "text": " Also, note that we often discuss image-impainting methods in this series."}, {"start": 174.73999999999998, "end": 180.26, "text": " For instance, what you see here is the legendary patch match algorithm that is one of these,"}, {"start": 180.26, "end": 183.66, "text": " and it is able to fill in missing parts of an image."}, {"start": 183.66, "end": 190.17999999999998, "text": " However, in image-impainting, most of the image is intact with smaller regions that are missing."}, {"start": 190.17999999999998, "end": 195.45999999999998, "text": " This is even more difficult than image-impainting because the vast majority of the image is completely"}, {"start": 195.45999999999998, "end": 196.45999999999998, "text": " missing."}, {"start": 196.46, "end": 201.5, "text": " The fact that we can now do this with learning-based methods is absolutely incredible."}, {"start": 201.5, "end": 207.46, "text": " The first author of the paper is Anton Kaplanjan, who is a brilliant and very rigorous mathematician,"}, {"start": 207.46, "end": 213.10000000000002, "text": " so of course, the results are evaluated in detail both in terms of mathematics and with"}, {"start": 213.10000000000002, "end": 214.3, "text": " a user study."}, {"start": 214.3, "end": 217.02, "text": " Make sure to have a look at the paper for more on that."}, {"start": 217.02, "end": 221.34, "text": " We got to know each other with Anton during the days when all we did was light transport"}, {"start": 221.34, "end": 227.02, "text": " simulations all day, every day, and we're always speculating about potential projects"}, {"start": 227.02, "end": 232.5, "text": " and to migrate sadness, somehow, unfortunately, we never managed to work together for a full"}, {"start": 232.5, "end": 233.5, "text": " project."}, {"start": 233.5, "end": 235.7, "text": " Again, congratulations, Anton."}, {"start": 235.7, "end": 236.7, "text": " Beautiful work."}, {"start": 236.7, "end": 238.54, "text": " What a time to be alive."}, {"start": 238.54, "end": 240.94, "text": " This episode has been supported by Linode."}, {"start": 240.94, "end": 244.66, "text": " Linode is the world's largest independent cloud computing provider."}, {"start": 244.66, "end": 251.18, "text": " They offer affordable GPU instances featuring the Quadro RTX 6000, which is tailor-made"}, {"start": 251.18, "end": 256.22, "text": " for AI, scientific computing, and computer graphics projects."}, {"start": 256.22, "end": 259.02, "text": " Exactly the kind of works you see here in this series."}, 
{"start": 259.02, "end": 263.78000000000003, "text": " If you feel inspired by these works and you wish to run your experiments or deploy your"}, {"start": 263.78000000000003, "end": 268.82, "text": " already existing works through a simple and reliable hosting service, make sure to join"}, {"start": 268.82, "end": 273.3, "text": " over 800,000 other happy customers and choose Linode."}, {"start": 273.3, "end": 279.22, "text": " To spin up your own GPU instance and receive a $20 free credit, visit Linode.com slash"}, {"start": 279.22, "end": 285.86, "text": " papers or click the link in the video description and use the promo code papers20 during sign-up."}, {"start": 285.86, "end": 286.86, "text": " Give it a try today."}, {"start": 286.86, "end": 291.74, "text": " Our thanks to Linode for supporting the series and helping us make better videos for you."}, {"start": 291.74, "end": 318.5, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=wsFgrzYwchQ
This Beautiful Fluid Simulator Warps Time…Kind Of 🌊
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "A Temporally Adaptive Material Point Method with Regional Time Stepping" is available here: http://taichi.graphics/wp-content/uploads/2018/06/asyncmpm.pdf Disney’s “A Material Point Method For Snow Simulation” is available here: Video: https://www.youtube.com/watch?v=O0kyDKu8K-k Paper: https://www.math.ucla.edu/~jteran/papers/SSCTS13.pdf 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. The opening video sequence of this paper immediately starts with a beautiful snow simulation, which I presume is an homage to a legendary Disney paper from 2013 by the name A Material Point Method for Snow Simulation, which, to the best of my knowledge, was used for the first Frozen movie. I was super excited to showcase that paper when this series started, however, unfortunately, I was unable to get the rights to do it, but I made sure to put a link to the original paper with the same scene in the video description if you're interested. Now, typically, we are looking to produce high-resolution simulations with lots of detail, however, this takes from hours to days to compute. So, how can we deal with this kind of complexity? Well, approximately 400 videos ago, in Two Minute Papers episode 10, we talked about a technique that introduced spatial adaptivity to this process. The adaptive part means that it made the simulation finer and coarser depending on what parts of the simulation are visible. The parts that we don't see can be run through a coarser simulation because we won't be able to see the difference. Very smart. The spatial part means that we use particles and subdivide the 3D space into grid points in which we compute the necessary quantities like velocities and pressures. This was a great paper on adaptive fluid simulations, but now look at this new paper. This one says that it is about temporal adaptivity. There are two issues that immediately arise. First, we don't know what temporal adaptivity means, and even if we did, we'll find out that this is something that is almost impossible to pull off. Let me explain. There is a great deal of difficulty in choosing the right time steps for such a simulation. These simulations are run in a way that we check and resolve all the collisions, and then we can advance the time forward by a tiny amount. This amount is called a time step, and choosing the appropriate time step has always been a challenge. You see, if we set it too large, we will be done faster and compute less, however, we will almost certainly miss some collisions because we skipped over them. It gets even worse because the simulation may end up in a state that is so incorrect that it is impossible to recover from, and we have to throw the entire thing out. If we set it too low, we get a more robust simulation, however, it will take from many hours to days to compute. So what does this temporal adaptivity mean exactly? Well, it means that there is not one global time step for the simulation, but time is advanced differently at different places. You see here this delta t, which denotes the time step chosen at each place, and the blue color coding marks a simple region where there isn't much going on, so we can get away with bigger time steps and less computation without missing important events. The red regions have to be simulated with smaller time steps because there is a lot going on and we would miss out on that. Hence, the new technique is called an asynchronous method, because it is a crazy simulation where time advances in different amounts at different spatial regions. So how do we test this solution? Well, of course, ideally this should look the same as the synchronous simulation. So does it? You bet your papers it does. Look at that. Absolutely fantastic. And since we can get away with less computation, it is faster. How much faster? 
In the worst cases, it is 40% faster; in the better ones, 10 times faster. So, kind of, my all-nighter fluid simulations can be done in one night. Sign me up. What a time to be alive. This episode has been supported by Lambda. If you are a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you that they are offering GPU Cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold on to your papers, because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to lambdalabs.com slash papers and sign up for one of their amazing GPU instances today. Thanks to Lambda for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
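As a rough illustration of the regional time stepping idea above, here is a toy Python sketch: each spatial region gets its own time step from a CFL-style rule based on how fast things move there, and calm regions take a few large steps while busy regions take many small ones before everyone meets at a common synchronization time. This is only a conceptual sketch with assumed names and constants, not the paper's asynchronous material point method.

import numpy as np

def regional_time_steps(max_speed_per_region, dx, cfl=0.5, dt_min=1e-4, dt_max=1e-2):
    """Pick a time step for each region from a CFL-like rule:
    dt <= cfl * dx / max_speed, clamped to [dt_min, dt_max]."""
    speeds = np.maximum(np.asarray(max_speed_per_region, dtype=float), 1e-9)
    return np.clip(cfl * dx / speeds, dt_min, dt_max)

def advance_to(regions_time, region_dts, t_sync):
    """Advance every region asynchronously until all of them reach the
    synchronization time t_sync; returns how many substeps each region took."""
    steps = np.zeros_like(region_dts, dtype=int)
    for i, dt in enumerate(region_dts):
        while regions_time[i] < t_sync:
            regions_time[i] = min(regions_time[i] + dt, t_sync)
            steps[i] += 1          # calm (blue) regions take few big steps,
    return steps                   # busy (red) regions take many small ones

dts = regional_time_steps(max_speed_per_region=[0.1, 2.0, 8.0], dx=0.01)
print(advance_to(np.zeros(3), dts, t_sync=0.04))  # e.g. [ 4 16 64]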
[{"start": 0.0, "end": 4.38, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifahir."}, {"start": 4.38, "end": 10.200000000000001, "text": " The opening video sequence of this paper immediately starts with a beautiful snow simulation,"}, {"start": 10.200000000000001, "end": 16.76, "text": " which I presume is an homage to a legendary Disney paper from 2013 by the name a material"}, {"start": 16.76, "end": 21.76, "text": " point method for snow simulation, which, to the best of my knowledge, was used for the"}, {"start": 21.76, "end": 23.32, "text": " first Frozen movie."}, {"start": 23.32, "end": 28.92, "text": " I was super excited to showcase that paper when this series started, however, unfortunately,"}, {"start": 28.92, "end": 32.9, "text": " I was unable to get the rights to do it, but I make sure to put a link to the original"}, {"start": 32.9, "end": 36.84, "text": " paper with the same scene in the video description if you're interested."}, {"start": 36.84, "end": 43.32, "text": " Now, typically, we are looking to produce high-resolution simulations with lots of detail, however,"}, {"start": 43.32, "end": 46.24, "text": " this takes from hours to days to compute."}, {"start": 46.24, "end": 49.44, "text": " So, how can we deal with this kind of complexity?"}, {"start": 49.44, "end": 55.88, "text": " Well, approximately 400 videos ago, in two-minute papers episode 10, we talked about this technique"}, {"start": 55.88, "end": 59.800000000000004, "text": " that introduced spatial adaptivity to this process."}, {"start": 59.800000000000004, "end": 65.32000000000001, "text": " The adaptive part means that it made the simulation finer and coarser depending on what"}, {"start": 65.32000000000001, "end": 67.8, "text": " parts of the simulation are visible."}, {"start": 67.8, "end": 72.48, "text": " The parts that we don't see can be run through a coarser simulation because we won't be"}, {"start": 72.48, "end": 74.48, "text": " able to see the difference."}, {"start": 74.48, "end": 75.48, "text": " Very smart."}, {"start": 75.48, "end": 80.92, "text": " The spatial part means that we use particles and subdivide the 3D space into grid points"}, {"start": 80.92, "end": 85.84, "text": " in which we compute the necessary quantities like velocities and pressures."}, {"start": 85.84, "end": 91.4, "text": " This was a great paper on adaptive fluid simulations, but now look at this new paper."}, {"start": 91.4, "end": 95.2, "text": " This one says that it is about temporal adaptivity."}, {"start": 95.2, "end": 97.96000000000001, "text": " There are two issues that immediately arise."}, {"start": 97.96000000000001, "end": 103.4, "text": " First, we don't know what temporal adaptivity means and even if we did, we'll find out"}, {"start": 103.4, "end": 107.52000000000001, "text": " that this is something that is almost impossible to pull off."}, {"start": 107.52000000000001, "end": 108.52000000000001, "text": " Let me explain."}, {"start": 108.52000000000001, "end": 113.84, "text": " There is a great deal of difficulty in choosing the right time steps for such a simulation."}, {"start": 113.84, "end": 119.44, "text": " These simulations are run in a way that we check and resolve all the collisions and then"}, {"start": 119.44, "end": 123.12, "text": " we can advance the time forward by a tiny amount."}, {"start": 123.12, "end": 127.88000000000001, "text": " This amount is called a time step and choosing the appropriate time step has always been a"}, {"start": 
127.88000000000001, "end": 128.88, "text": " challenge."}, {"start": 128.88, "end": 135.32, "text": " You see, if we set it to two large, we will be done faster and compute less, however,"}, {"start": 135.32, "end": 139.6, "text": " we will almost certainly miss some collisions because we skipped over them."}, {"start": 139.6, "end": 144.6, "text": " It gets even worse because the simulation may end up in a state that is so incorrect"}, {"start": 144.6, "end": 149.56, "text": " that it is impossible to recover from and we have to throw the entire thing out."}, {"start": 149.56, "end": 155.44, "text": " If we set it to two low, we get a more robust simulation, however, it will take from many"}, {"start": 155.44, "end": 157.95999999999998, "text": " hours to days to compute."}, {"start": 157.95999999999998, "end": 161.16, "text": " So what does this temporal adaptivity mean exactly?"}, {"start": 161.16, "end": 166.28, "text": " Well, it means that there is not one global time step for the simulation, but time is"}, {"start": 166.28, "end": 169.52, "text": " advanced differently at different places."}, {"start": 169.52, "end": 174.76, "text": " You see here this delta T, this means the numbers chosen for the time steps and the blue color"}, {"start": 174.76, "end": 180.28, "text": " coding means a simple region where there isn't much going on so we can get away with bigger"}, {"start": 180.28, "end": 184.8, "text": " time steps and less computation without missing important events."}, {"start": 184.8, "end": 189.24, "text": " The red regions have to be simulated with smaller time steps because there is a lot going"}, {"start": 189.24, "end": 191.88, "text": " on and we would miss out on that."}, {"start": 191.88, "end": 197.28, "text": " Once the new technique is called an asynchronous method because it is a crazy simulation"}, {"start": 197.28, "end": 202.51999999999998, "text": " where time advances in different amounts at different spatial regions."}, {"start": 202.51999999999998, "end": 204.96, "text": " So how do we test this solution?"}, {"start": 204.96, "end": 210.35999999999999, "text": " Well, of course, ideally this should look the same as the synchronized simulation."}, {"start": 210.35999999999999, "end": 211.92, "text": " So does it?"}, {"start": 211.92, "end": 213.92, "text": " You bet your papers it does."}, {"start": 213.92, "end": 214.92, "text": " Look at that."}, {"start": 214.92, "end": 217.12, "text": " Absolutely fantastic."}, {"start": 217.12, "end": 221.2, "text": " And since we can get away with less computation, it is faster."}, {"start": 221.2, "end": 222.64, "text": " How much faster?"}, {"start": 222.64, "end": 228.48, "text": " In the worst cases, 40% faster in the better ones, 10 times faster."}, {"start": 228.48, "end": 234.35999999999999, "text": " So kind of my all-nighter fluid simulations can be done in one night, sign me up."}, {"start": 234.35999999999999, "end": 236.07999999999998, "text": " What a time to be alive."}, {"start": 236.07999999999998, "end": 238.51999999999998, "text": " This episode has been supported by Lambda."}, {"start": 238.51999999999998, "end": 243.76, "text": " If you are a researcher or a startup looking for cheap GPU compute to run these algorithms,"}, {"start": 243.76, "end": 245.92, "text": " check out Lambda GPU Cloud."}, {"start": 245.92, "end": 250.92, "text": " I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you"}, {"start": 250.92, "end": 254.11999999999998, "text": " that they are 
offering GPU Cloud services as well."}, {"start": 254.11999999999998, "end": 261.2, "text": " The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19."}, {"start": 261.2, "end": 266.2, "text": " Lambda's web-based IDE lets you easily access your instance right in your browser."}, {"start": 266.2, "end": 271.52, "text": " And finally, hold on to your papers because the Lambda GPU Cloud costs less than half of"}, {"start": 271.52, "end": 273.59999999999997, "text": " AWS and Azure."}, {"start": 273.59999999999997, "end": 278.91999999999996, "text": " Make sure to go to lambdaleps.com slash papers and sign up for one of their amazing GPU"}, {"start": 278.91999999999996, "end": 280.24, "text": " instances today."}, {"start": 280.24, "end": 283.52, "text": " Thanks to Lambda for helping us make better videos for you."}, {"start": 283.52, "end": 313.08, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=SIGQSgifs6s
Baking And Melting Chocolate Simulations Are Now Possible! 🍫
❤️ Check out Weights & Biases here and sign up for a free demo: https://www.wandb.com/papers Their blog post is available here: https://www.wandb.com/tutorial/build-a-neural-network 📝 The paper "A Thermomechanical Material Point Method for Baking and Cooking " is available here: https://www.math.ucla.edu/~myding/papers/baking_paper_final.pdf https://dl.acm.org/doi/10.1145/3355089.3356537 ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join  🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is one of those simulation papers where you can look at it for three seconds and immediately know what it's about. Let's try that. Clearly, expansion and baking are happening. And now, let's look inside. Hmm, yep, this is done. Clearly, this is a paper on simulating the process of baking, loving the idea. So how comprehensive is it? Well, for a proper baking procedure, the simulator also has to be able to deal with melting, solidification, dehydration, coloring, and much, much more. This requires developing a proper thermomechanical model where these materials are modeled as a collection of solids, water, and gas. Let's have a look at some more results. And we have to stop right here, because I'd like to tell you that the information density on this deceivingly simple scene is just stunning. On the X-axis, from left to right, we have a decreasing temperature in the oven, the left being the hottest, and the chocolate chip cookies above are simulated with an earlier work from 2014. The ones in the bottom row are made with the new technique. You can see a different kind of shape change as we increase the temperature, and if we crank the oven up even more, look there, even the chocolate chips are melting. Oh my goodness, what a paper. Talking about information density, you can also see here how these simulated pieces of dough of different viscosities react to different amounts of stress. Viscosity means the amount of resistance against deformation, therefore, as we go up, you can witness this kind of resistance increasing. Here you can see a cross-section of the bread, which shows the amount of heat everywhere. This not only teaches us why crust forms on the outside layer, but you can see how the heat diffuses slowly into the inside. This is a maxed-out paper. By this, I mean the execution quality is through the roof, and the paper is considered done not when it looks alright, but when the idea is being pushed to the limit, and the work is as good as it can be without trivial ways to improve it. And the results are absolute witchcraft. Huge congratulations to the authors. In fact, double congratulations, because it seems to me that this is only the second paper of Mengyuan Ding, the lead author, and it has been accepted to the SIGGRAPH Asia conference, which is one of the greatest achievements a computer graphics researcher can dream of. A paper of such quality on the second try. Wow! This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford, and Berkeley. They have excellent tutorial videos. In this one, the CEO himself teaches you how to build your own neural network and more. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
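The bread cross-section discussion above is, at its core, heat diffusion from the hot oven air into the cooler interior, which is why the crust forms on the outside first. Below is a minimal 1D finite-difference sketch of that process with made-up material constants; the actual paper couples this kind of heat transfer to a full thermomechanical material point method, which this toy example does not attempt.

import numpy as np

def bake_1d(n=50, thickness=0.04, t_oven=200.0, t_dough=25.0,
            alpha=1.4e-7, total_time=600.0):
    """Explicit finite-difference solution of 1D heat diffusion through dough.
    n: grid cells, thickness: loaf thickness in meters, alpha: thermal
    diffusivity (made-up, roughly water-like), temperatures in Celsius."""
    dx = thickness / n
    dt = 0.4 * dx * dx / alpha            # stable explicit step (needs dt <= dx^2 / (2*alpha))
    temp = np.full(n, t_dough)
    for _ in range(int(total_time / dt)):
        temp[0] = temp[-1] = t_oven       # outer surfaces held at oven temperature
        temp[1:-1] += alpha * dt / dx**2 * (temp[2:] - 2 * temp[1:-1] + temp[:-2])
    return temp

profile = bake_1d()
print(profile[:5], profile[len(profile) // 2])  # hot crust outside, much cooler core inside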
[{"start": 0.0, "end": 4.44, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifaher."}, {"start": 4.44, "end": 9.52, "text": " This is one of those simulation papers where you can look at it for three seconds and immediately"}, {"start": 9.52, "end": 11.6, "text": " know what it's about."}, {"start": 11.6, "end": 13.120000000000001, "text": " Let's try that."}, {"start": 13.120000000000001, "end": 17.36, "text": " Clearly, expansion and baking is happening."}, {"start": 17.36, "end": 20.2, "text": " And now, let's look inside."}, {"start": 20.2, "end": 22.92, "text": " Hmm, yep, this is done."}, {"start": 22.92, "end": 28.96, "text": " Clearly, this is a paper on simulating the process of baking, loving the idea."}, {"start": 28.96, "end": 31.400000000000002, "text": " So how comprehensive is it?"}, {"start": 31.400000000000002, "end": 39.08, "text": " Well, for a proper baking procedure, the simulator also has to be able to deal with melting, solidification,"}, {"start": 39.08, "end": 42.88, "text": " dehydration, coloring, and much, much more."}, {"start": 42.88, "end": 48.72, "text": " This requires developing a proper thermomechanical model where these materials are modeled as"}, {"start": 48.72, "end": 54.0, "text": " a collection of solids, water, and gas."}, {"start": 54.0, "end": 56.040000000000006, "text": " Let's have a look at some more results."}, {"start": 56.04, "end": 61.28, "text": " And we have to stop right here because I'd like to tell you that the information density"}, {"start": 61.28, "end": 65.28, "text": " on this deceivingly simple scene is just stunning."}, {"start": 65.28, "end": 70.24, "text": " In the X-axis, from the left to right, we have a decreasing temperature in the oven, left"}, {"start": 70.24, "end": 75.56, "text": " being the hottest, and the chocolate chip cookies above are simulated with an earlier work"}, {"start": 75.56, "end": 77.28, "text": " from 2014."}, {"start": 77.28, "end": 80.52, "text": " The ones in the bottom row are made with a new technique."}, {"start": 80.52, "end": 85.16, "text": " You can see a different kind of shape change as we increase the temperature if we crank the"}, {"start": 85.16, "end": 91.03999999999999, "text": " oven up even more, and look there, even the chocolate chips are melting."}, {"start": 91.03999999999999, "end": 95.72, "text": " Oh my goodness, what a paper."}, {"start": 95.72, "end": 100.36, "text": " Talking about information density, you can also see here how these simulated pieces of"}, {"start": 100.36, "end": 105.32, "text": " dough of different viscosities react to different amounts of stress."}, {"start": 105.32, "end": 110.44, "text": " Viscosity means the amount of resistance against deformation, therefore, as we go up, you"}, {"start": 110.44, "end": 114.56, "text": " can witness this kind of resistance increasing."}, {"start": 114.56, "end": 119.44, "text": " Here you can see a cross-section of the bread, which shows the amount of heat everywhere."}, {"start": 119.44, "end": 124.16, "text": " This not only teaches us why crust forms on the outside layer, but you can see how the"}, {"start": 124.16, "end": 128.56, "text": " amount of heat diffuses slowly into the inside."}, {"start": 128.56, "end": 131.12, "text": " This is a maxed-out paper."}, {"start": 131.12, "end": 136.64000000000001, "text": " By this, I mean the execution quality is through the roof, and the paper is considered done"}, {"start": 136.64000000000001, "end": 141.68, "text": " not when it looks 
alright, but when the idea is being pushed to the limit, and the work"}, {"start": 141.68, "end": 145.96, "text": " is as good as it can be without trivial ways to improve it."}, {"start": 145.96, "end": 148.84, "text": " And the results are absolute witchcraft."}, {"start": 148.84, "end": 150.96, "text": " Huge congratulations to the authors."}, {"start": 150.96, "end": 156.88, "text": " In fact, double congratulations because it seems to me that this is only the second paper"}, {"start": 156.88, "end": 162.4, "text": " of Manguondink, the lead author, and it has been accepted to the SIGRAPH Asia conference,"}, {"start": 162.4, "end": 167.08, "text": " which is one of the greatest achievements a computer graphics researcher can dream of."}, {"start": 167.08, "end": 170.68, "text": " The paper of such quality for the second try."}, {"start": 170.68, "end": 171.68, "text": " Wow!"}, {"start": 171.68, "end": 175.28, "text": " This episode has been supported by weights and biases."}, {"start": 175.28, "end": 179.92000000000002, "text": " Wates and biases provides tools to track your experiments in your deep learning projects."}, {"start": 179.92000000000002, "end": 185.32, "text": " It can save you a ton of time and money in these projects and is being used by OpenAI"}, {"start": 185.32, "end": 188.4, "text": " to your research, Stanford, and Berkeley."}, {"start": 188.4, "end": 190.56, "text": " They have excellent tutorial videos."}, {"start": 190.56, "end": 196.72, "text": " In this one, the CEO himself teaches you how to build your own neural network and more."}, {"start": 196.72, "end": 204.68, "text": " Make sure to visit them through www.wndb.com slash papers or just click the link in the video"}, {"start": 204.68, "end": 207.56, "text": " description and you can get a free demo today."}, {"start": 207.56, "end": 211.44, "text": " Our thanks to weights and biases for helping us make better videos for you."}, {"start": 211.44, "end": 241.4, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=hYV4-m7_SK8
MuZero: DeepMind’s New AI Mastered More Than 50 Games
❤️ Check out Linode here and get $20 free credit on your account: https://www.linode.com/papers 📝 The paper "Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model" is available here: https://arxiv.org/abs/1911.08265 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-1215079/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Some papers come with an intense media campaign and a lot of nice videos, and some other amazing papers are at the risk of slipping under the radar because of the lack of such a media presence. This new work from DeepMind is indeed absolutely amazing, you'll see in a moment why, and is not really talked about. So in this video, let's try to reward such a work. In many episodes you get ice cream for your eyes, but today you get ice cream for your mind. Buckle up. In the last few years, we have seen DeepMind's AI defeat the best Go players in the world, and after OpenAI's venture in the game of Dota 2, DeepMind embarked on a journey to defeat pro players in Starcraft 2, a real-time strategy game. This is a game that requires a great deal of mechanical skill, split-second decision-making, and we have imperfect information as we only see what our units can see. A nightmare situation for any AI. You see some footage of its previous games here on the screen. And in my opinion, people seem to pay too much attention to how good a given algorithm performs and too little to how general it is. Let me explain. DeepMind has developed a new technique that tries to rely more on its predictions of the future and generalizes to many, many more games than previous techniques. This includes AlphaZero, a previous technique also from them that was able to play Go, Chess, and Japanese Chess or Shogi as well, and beat any human player at these games confidently. This new method is so general that it does as well as AlphaZero at these games, however, it can also play a wide variety of Atari games as well. And that is the key here. Writing an algorithm that plays Chess well has been a possibility for decades. For instance, if you wish to know more, make sure to check out Stockfish, which is an incredible open source project and a very potent algorithm. However, Stockfish cannot play anything else. Whenever we look at a new game, we have to derive a new algorithm that solves it. Not so much with these learning methods that can generalize to a wide variety of games. This is why I would like to argue that the generalization capability of these AIs is just as important as their performance. In other words, if there was a narrow algorithm that is the best possible chess algorithm that ever existed, or a somewhat below world champion level AI that can play any game we can possibly imagine, I would take the latter in a heartbeat. Now, speaking about generalization, let's see how well it does at these Atari games. Shall we? After 30 minutes of time on each game, it significantly outperforms humans on nearly all of these games, and the percentages here show you what kind of outperformance we are talking about. In many cases, the algorithm outperforms us several times and up to several hundred times. Absolutely incredible. As you see, it has a more than formidable score on almost all of these games and therefore it generalizes quite well. I'll tell you in a moment about the games it falters at, but for now, let's compare it to three other competing algorithms. You see one bold number per row, which always highlights the best performing algorithm for your convenience. The new technique beats the others on about 66% of the games, including the recurrent experience replay technique, in short, R2D2. Yes, this is another one of those crazy paper names. And even when it falls short, it is typically very close. 
As a reference, humans triumphed on less than 10% of the games. We still have a big fat zero on the Pitfall and Montezuma's Revenge games. So why is that? Well, these games require long-term planning, which is one of the most difficult cases for reinforcement learning algorithms. In an earlier episode, we discussed how we can infuse an AI agent with curiosity to go out there and explore some more with success. However, note that these algorithms are more narrow than the one we've been talking about today. So there is still plenty of work to be done, but I hope you see that this is incredibly nimble progress on AI research. Bravo, DeepMind. What a time to be alive. This episode has been supported by Linode. Linode is the world's largest independent cloud computing provider. They offer affordable GPU instances featuring the Quadro RTX 6000, which is tailor-made for AI, scientific computing, and computer graphics projects. Exactly the kind of works you see here in this series. If you feel inspired by these works and you wish to run your own experiments or deploy your already existing works through a simple and reliable hosting service, make sure to join over 800,000 other happy customers and choose Linode. To spin up your own GPU instance and receive a $20 free credit, visit linode.com slash papers or click the link in the description and use the promo code Papers20 during signup. Give it a try today. Our thanks to Linode for supporting the series and helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
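The core idea highlighted above, relying on predictions of the future, means planning inside a learned model of the game instead of the game itself. MuZero's actual planner is a Monte Carlo tree search over learned representation, dynamics and prediction networks; the short Python sketch below only illustrates the general flavor with a much simpler random-shooting planner, and the dynamics_model and reward_model callables are hypothetical stand-ins for such learned networks, not anything from the paper's code.

import numpy as np

def plan_with_learned_model(state, dynamics_model, reward_model,
                            n_actions, horizon=5, n_candidates=64, rng=None):
    # Random-shooting planner: evaluate candidate action sequences entirely inside
    # a learned model of the environment, never touching the real game.
    rng = rng or np.random.default_rng()
    best_return, best_first_action = -np.inf, 0
    for _ in range(n_candidates):
        actions = rng.integers(n_actions, size=horizon)   # one imagined action sequence
        s, total = state, 0.0
        for a in actions:
            total += reward_model(s, a)   # predicted reward, not the real one
            s = dynamics_model(s, a)      # predicted next (latent) state
        if total > best_return:
            best_return, best_first_action = total, int(actions[0])
    return best_first_action              # execute only the first action, then replan

In MuZero itself the candidate sequences are not random but chosen by tree search guided by learned policy and value estimates, which is what makes planning tractable on games as deep as Go and Chess.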
[{"start": 0.0, "end": 4.5600000000000005, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Kato Jornai-Fehir."}, {"start": 4.5600000000000005, "end": 10.28, "text": " Some papers come with an intense media campaign and a lot of nice videos and some other amazing"}, {"start": 10.28, "end": 15.26, "text": " papers are at the risk of slipping under the radar because of the lack of such a media"}, {"start": 15.26, "end": 16.26, "text": " presence."}, {"start": 16.26, "end": 21.84, "text": " This new work from DeepMind is indeed absolutely amazing, you'll see in a moment why and is"}, {"start": 21.84, "end": 23.52, "text": " not really talked about."}, {"start": 23.52, "end": 26.68, "text": " So in this video, let's try to reward such a work."}, {"start": 26.68, "end": 31.52, "text": " In many episodes you get ice cream for your eyes, but today you get ice cream for your"}, {"start": 31.52, "end": 32.519999999999996, "text": " mind."}, {"start": 32.519999999999996, "end": 33.519999999999996, "text": " Buckle up."}, {"start": 33.519999999999996, "end": 38.36, "text": " In the last few years, we have seen DeepMind's AI defeat the best goal players in the world"}, {"start": 38.36, "end": 43.8, "text": " and after open AI's venture in the game of Dota 2, DeepMind embarked on a journey to"}, {"start": 43.8, "end": 48.16, "text": " defeat pro players in Starcraft 2, a real-time strategy game."}, {"start": 48.16, "end": 53.28, "text": " This is a game that requires a great deal of mechanical skill, split second decision-making,"}, {"start": 53.28, "end": 58.24, "text": " and we have imperfect information as we only see what our units can see."}, {"start": 58.24, "end": 60.72, "text": " A nightmare situation for any AI."}, {"start": 60.72, "end": 64.36, "text": " You see some footage of its previous games here on the screen."}, {"start": 64.36, "end": 69.68, "text": " And in my opinion, people seem to pay too much attention to how good a given algorithm"}, {"start": 69.68, "end": 73.76, "text": " performs and too little to how general it is."}, {"start": 73.76, "end": 74.76, "text": " Let me explain."}, {"start": 74.76, "end": 79.24000000000001, "text": " DeepMind has developed a new technique that tries to rely more on its predictions of"}, {"start": 79.24, "end": 84.36, "text": " the future and generalizes to many, many more games than previous techniques."}, {"start": 84.36, "end": 90.19999999999999, "text": " This includes Alpha Zero, a previous technique also from them that was able to play Go,"}, {"start": 90.19999999999999, "end": 96.96, "text": " Chess, and Japanese Chess or Shogi as well, and beat any human player at these games confidently."}, {"start": 96.96, "end": 103.11999999999999, "text": " This new method is so general that it does as well as Alpha Zero at these games, however,"}, {"start": 103.11999999999999, "end": 106.96, "text": " it can also play a wide variety of Atari games as well."}, {"start": 106.96, "end": 112.36, "text": " And that is the key here. 
Writing an algorithm that plays Chess well has been a possibility"}, {"start": 112.36, "end": 113.6, "text": " for decades."}, {"start": 113.6, "end": 118.47999999999999, "text": " For instance, if you wish to know more, make sure to check out Stockfish, which is an incredible"}, {"start": 118.47999999999999, "end": 121.72, "text": " open source project and a very potent algorithm."}, {"start": 121.72, "end": 125.19999999999999, "text": " However, Stockfish cannot play anything else."}, {"start": 125.19999999999999, "end": 130.24, "text": " Whenever we look at a new game, we have to derive a new algorithm that solves it."}, {"start": 130.24, "end": 135.35999999999999, "text": " Not so much with these learning methods that can generalize to a wide variety of games."}, {"start": 135.36, "end": 141.12, "text": " This is why I would like to argue that the generalization capability of these AIs is just as"}, {"start": 141.12, "end": 143.56, "text": " important as their performance."}, {"start": 143.56, "end": 148.44000000000003, "text": " In other words, if there was a narrow algorithm that is the best possible chess algorithm"}, {"start": 148.44000000000003, "end": 154.88000000000002, "text": " that ever existed, or a somewhat below world champion level AIs that can play any game"}, {"start": 154.88000000000002, "end": 158.92000000000002, "text": " we can possibly imagine, I would take the letter in a heartbeat."}, {"start": 158.92000000000002, "end": 164.48000000000002, "text": " Now, speaking about generalization, let's see how well it does at these Atari games."}, {"start": 164.48, "end": 165.67999999999998, "text": " Shall we?"}, {"start": 165.67999999999998, "end": 172.67999999999998, "text": " After 30 minutes of time on each game, it significantly outperforms humans on nearly all of these games,"}, {"start": 172.67999999999998, "end": 177.48, "text": " the percentages show you here what kind of outperformance we are talking about."}, {"start": 177.48, "end": 184.95999999999998, "text": " In many cases, the algorithm outperforms us several times and up to several hundred times."}, {"start": 184.95999999999998, "end": 186.6, "text": " Absolutely incredible."}, {"start": 186.6, "end": 192.04, "text": " As you see, it has a more than formidable score on almost all of these games and therefore"}, {"start": 192.04, "end": 194.2, "text": " it generalizes quite well."}, {"start": 194.2, "end": 198.72, "text": " I'll tell you in a moment about the games it falters at, but for now, let's compare it"}, {"start": 198.72, "end": 201.51999999999998, "text": " to three other competing algorithms."}, {"start": 201.51999999999998, "end": 206.72, "text": " You see one ball number per row, which always highlights the best performing algorithm for"}, {"start": 206.72, "end": 208.11999999999998, "text": " your convenience."}, {"start": 208.11999999999998, "end": 213.6, "text": " The new technique beats the others on about 66% of the games, including the recurrent"}, {"start": 213.6, "end": 217.0, "text": " experience replay technique in short R2D2."}, {"start": 217.0, "end": 221.0, "text": " Yes, this is another one of those crazy paper names."}, {"start": 221.0, "end": 225.04, "text": " And even when it falls short, it is typically very close."}, {"start": 225.04, "end": 229.52, "text": " As a reference, humans triumphed on less than 10% of the games."}, {"start": 229.52, "end": 234.2, "text": " We still have a big fat zero on pitfall and the Montezuma's revenge games."}, {"start": 234.2, "end": 235.96, 
"text": " So why is that?"}, {"start": 235.96, "end": 241.0, "text": " Well, these games require long-term planning, which is one of the most difficult cases"}, {"start": 241.0, "end": 243.24, "text": " for reinforcement learning algorithms."}, {"start": 243.24, "end": 249.56, "text": " In an earlier episode, we discussed how we can infuse an AI agent with curiosity to go"}, {"start": 249.56, "end": 252.84, "text": " out there and explore some more with success."}, {"start": 252.84, "end": 257.92, "text": " However, note that these algorithms are more narrow than the one we've been talking about"}, {"start": 257.92, "end": 258.92, "text": " today."}, {"start": 258.92, "end": 263.64, "text": " So there is still plenty of work to be done, but I hope you see that this is incredibly"}, {"start": 263.64, "end": 266.32, "text": " nimble progress on AI research."}, {"start": 266.32, "end": 267.64, "text": " Bravo deep-mind."}, {"start": 267.64, "end": 269.52, "text": " What a time to be alive."}, {"start": 269.52, "end": 272.0, "text": " This episode has been supported by Linode."}, {"start": 272.0, "end": 275.88, "text": " Linode is the world's largest independent cloud computing provider."}, {"start": 275.88, "end": 282.2, "text": " They offer affordable GPU instances featuring the Quadro RTX 6000, which is tailor-made for"}, {"start": 282.2, "end": 286.64, "text": " AI, scientific computing, and computer graphics projects."}, {"start": 286.64, "end": 289.52, "text": " Exactly the kind of works you see here in this series."}, {"start": 289.52, "end": 294.56, "text": " If you feel inspired by these works and you wish to run your own experiments or deploy"}, {"start": 294.56, "end": 299.8, "text": " your already existing works through a simple and reliable hosting service, make sure to join"}, {"start": 299.8, "end": 304.64, "text": " over 800,000 other happy customers and choose Linode."}, {"start": 304.64, "end": 310.96, "text": " To spin up your own GPU instance and receive a $20 free credit, visit linode.com slash"}, {"start": 310.96, "end": 316.96, "text": " papers or click the link in the description and use the promo code Papers20 during SINAM."}, {"start": 316.96, "end": 318.28, "text": " Give it a try today."}, {"start": 318.28, "end": 322.88, "text": " Our thanks to Linode for supporting the series and helping us make better videos for you."}, {"start": 322.88, "end": 334.96, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=O8l4Kn-j-5M
This Robot Arm Learned To Assemble Objects It Hasn’t Seen Before
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Form2Fit: Learning Shape Priors for Generalizable Assembly from Disassembly" is available here: https://form2fit.github.io/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join  🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Have a look and marvel at this learning-based assembler robot that is able to put together simple contraptions. Since this is a neural network-based learning method, it needs to be trained to be able to do this. So, how is it trained? Normally, to train such an algorithm, we would have to show it a lot of pairs of the same contraption and tell it that this is what it looks like when it's disassembled and what you see here is the same thing assembled. If we did this, this method would be called supervised learning. This would be very time-consuming and potentially expensive as it would require the presence of a human as well. A more convenient way would be to go for unsupervised learning, where we just chuck a lot of things on the table and say, well, robot, you figure it out. However, this would be very inefficient, if at all possible, because we would have to provide it with many, many contraptions that wouldn't fit on the table. But this paper went for none of these solutions, as they opted for a really smart, self-supervised technique. So what does that mean? Well, first, we give the robot an assembled contraption and ask it to disassemble it. And therein lies the really cool idea, because disassembling it is easier, and by rewinding the process, it also gets to know how to assemble it later. And the training process takes place by assembling, disassembling, and doing it over and over again several hundred times per object. Isn't this amazing? Love it. However, what is the point of all this? Instead, we could just add explicit instructions to a non-learning-based robot to assemble the objects. Why not just do that? And the answer lies in one of the most important aspects within machine learning: generalization. If we program a robot to be able to assemble one thing, it will be able to do exactly that, assemble one thing. And whenever we have a new contraption on our hands, we need to reprogram it. However, with this technique, after the learning process took place, we will be able to give it a new, previously unseen object and it will have a chance to assemble it. This requires intelligence to perform. So how good is it at generalization? Well, get this, the paper reports that when showing it new objects, it was able to successfully assemble new, previously unseen contraptions 86% of the time. Incredible. So, what about the limitations? This technique works on a 2D planar surface, for instance, this table, and while it is able to insert most of these parts vertically, it does not deal well with more complex assemblies that require inserting screws and pegs at a 45-degree angle. As we always say, two more papers down the line and this will likely be improved significantly. If you have ever bought a new bed or a cupboard and said, well, it just looks like a block, how hard can it be to assemble? Wait, does this thing have more than 100 screws and pegs? I wonder why. And then, 4.5 hours later, you'll find out yourself. I hope techniques like these will help us save time by enabling us to buy many of these contraptions preassembled, and they can be used for much, much more. What a time to be alive. This episode has been supported by Lambda. If you're a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you that they're offering GPU cloud services as well.
The Lambda GPU cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold onto your papers because the Lambda GPU cloud costs less than half of AWS and Azure. Make sure to go to LambdaLabs.com slash papers and sign up for one of their amazing GPU instances today. Thanks to Lambda for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
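To make the learning-by-disassembly idea above a little more concrete, here is a minimal Python schematic of the self-supervised data-collection loop. All of the robot and camera calls (parts_remaining, remove_next_part, capture, replay_in_reverse) and the step fields are illustrative placeholders invented for this sketch, not the Form2Fit API; the only claim taken from the transcript is that each disassembly step, read backwards, becomes a labeled assembly example.

def collect_self_supervised_pairs(robot, camera, n_rounds=300):
    # Each disassembly step, read backwards, is a labeled assembly example:
    # "given this scene, this part belongs at that pose."
    dataset = []
    for _ in range(n_rounds):                        # several hundred rounds per object
        recorded_steps = []
        while robot.parts_remaining():
            step = robot.remove_next_part()          # pick a part, record where it came from
            image = camera.capture()                 # scene right after the removal
            dataset.append((image, step.part_id, step.origin_pose))
            recorded_steps.append(step)
        robot.replay_in_reverse(recorded_steps)      # rewind the motions to reset the scene
    return dataset

A placement model trained on such (image, part, pose) triples can then be asked to place parts it has never seen, which is the kind of generalization the 86% figure in the video measures.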
[{"start": 0.0, "end": 4.48, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejolna Efehir."}, {"start": 4.48, "end": 9.84, "text": " Have a look and marvel at this learning-based assembler robot that is able to put together"}, {"start": 9.84, "end": 11.56, "text": " simple contraptions."}, {"start": 11.56, "end": 15.6, "text": " Since this is a neural network-based learning method, it needs to be trained to be able"}, {"start": 15.6, "end": 16.6, "text": " to do this."}, {"start": 16.6, "end": 18.92, "text": " So, how is it trained?"}, {"start": 18.92, "end": 23.64, "text": " Normally, to train such an algorithm, we would have to show it a lot of pairs of the same"}, {"start": 23.64, "end": 28.0, "text": " contraption and tell it that this is what it looks like when it's disassembled and what"}, {"start": 28.0, "end": 31.32, "text": " you see here is the same thing assembled."}, {"start": 31.32, "end": 35.44, "text": " If we did this, this method would be called supervised learning."}, {"start": 35.44, "end": 40.4, "text": " This would be very time-consuming and potentially expensive as it would require the presence"}, {"start": 40.4, "end": 41.8, "text": " of a human as well."}, {"start": 41.8, "end": 47.32, "text": " A more convenient way would be to go for unsupervised learning where we just chuck a lot of things"}, {"start": 47.32, "end": 52.0, "text": " on the table and say, well, robot, you figure it out."}, {"start": 52.0, "end": 56.760000000000005, "text": " However, this would be very inefficient if at all possible because we would have to"}, {"start": 56.76, "end": 60.92, "text": " provide it many, many contraptions that wouldn't fit on the table."}, {"start": 60.92, "end": 66.16, "text": " But this paper went for none of these solutions as they opted for a really smart, self-supervised"}, {"start": 66.16, "end": 67.16, "text": " technique."}, {"start": 67.16, "end": 68.56, "text": " So what does that mean?"}, {"start": 68.56, "end": 74.92, "text": " Well, first, we give the robot an assembled contraption and ask it to disassemble it."}, {"start": 74.92, "end": 80.8, "text": " And therein lies the really cool idea because disassembling it is easier and by rewinding"}, {"start": 80.8, "end": 85.28, "text": " the process, it also gets to know how to assemble it later."}, {"start": 85.28, "end": 90.96000000000001, "text": " And the training process takes place by assembling, disassembling, and doing it over and over"}, {"start": 90.96000000000001, "end": 94.12, "text": " again several hundred times per object."}, {"start": 94.12, "end": 95.88, "text": " Isn't this amazing?"}, {"start": 95.88, "end": 96.88, "text": " Love it."}, {"start": 96.88, "end": 99.6, "text": " However, what is the point of all this?"}, {"start": 99.6, "end": 104.88, "text": " Instead, we could just add explicit instructions to a non-learning based robot to assemble the"}, {"start": 104.88, "end": 105.88, "text": " objects."}, {"start": 105.88, "end": 107.8, "text": " Why not just do that?"}, {"start": 107.8, "end": 113.56, "text": " And the answer lies in one of the most important aspects within machine learning generalization."}, {"start": 113.56, "end": 119.72, "text": " If we program a robot to be able to assemble one thing, it will be able to do exactly that,"}, {"start": 119.72, "end": 121.60000000000001, "text": " assemble one thing."}, {"start": 121.60000000000001, "end": 125.92, "text": " And whenever we have a new contraption on our hands, we need to reprogram it."}, {"start": 
125.92, "end": 130.68, "text": " However, with this technique, after the learning process took place, we will be able to give"}, {"start": 130.68, "end": 135.64000000000001, "text": " it a new previously unseen object and it will have a chance to assemble it."}, {"start": 135.64000000000001, "end": 138.24, "text": " This requires intelligence to perform."}, {"start": 138.24, "end": 140.64000000000001, "text": " So how good is it at generalization?"}, {"start": 140.64, "end": 146.83999999999997, "text": " Well, get this, the paper reports that when showing it new objects, it was able to successfully"}, {"start": 146.83999999999997, "end": 152.16, "text": " assemble new previously unseen contraptions 86% of the time."}, {"start": 152.16, "end": 153.16, "text": " Incredible."}, {"start": 153.16, "end": 155.64, "text": " So, what about the limitations?"}, {"start": 155.64, "end": 160.88, "text": " This technique works on a 2D planar surface, for instance, this table, and while it is"}, {"start": 160.88, "end": 166.79999999999998, "text": " able to insert most of these parts vertically, it does not deal well with more complex assemblies"}, {"start": 166.8, "end": 171.4, "text": " that require inserting screws and pegs in a 45 degree angle."}, {"start": 171.4, "end": 177.12, "text": " As we always say, two more papers down the line and this will likely be improved significantly."}, {"start": 177.12, "end": 183.0, "text": " If you have ever bought a new bed or a cupboard and said, well, it just looks like a block."}, {"start": 183.0, "end": 185.08, "text": " How hard can it be to assemble?"}, {"start": 185.08, "end": 189.20000000000002, "text": " Wait, does this thing have more than 100 screws and pegs?"}, {"start": 189.20000000000002, "end": 190.56, "text": " I wonder why."}, {"start": 190.56, "end": 194.36, "text": " And then, 4.5 hours later, you'll find out yourself."}, {"start": 194.36, "end": 199.48000000000002, "text": " I hope, techniques like these will help us save time by enabling us to buy many of these"}, {"start": 199.48000000000002, "end": 204.56, "text": " contraptions preassembled and it can be used for much, much more."}, {"start": 204.56, "end": 206.44000000000003, "text": " What a time to be alive."}, {"start": 206.44000000000003, "end": 208.8, "text": " This episode has been supported by Lambda."}, {"start": 208.8, "end": 213.92000000000002, "text": " If you're a researcher or a startup looking for cheap GPU compute to run these algorithms,"}, {"start": 213.92000000000002, "end": 216.4, "text": " check out Lambda GPU Cloud."}, {"start": 216.4, "end": 220.96, "text": " I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you"}, {"start": 220.96, "end": 224.16000000000003, "text": " that they're offering GPU cloud services as well."}, {"start": 224.16, "end": 231.48, "text": " The Lambda GPU cloud can train ImageNet to 93% accuracy for less than $19."}, {"start": 231.48, "end": 236.4, "text": " Lambda's web-based IDE lets you easily access your instance right in your browser."}, {"start": 236.4, "end": 241.96, "text": " And finally, hold onto your papers because the Lambda GPU cloud costs less than half of"}, {"start": 241.96, "end": 244.35999999999999, "text": " AWS and Azure."}, {"start": 244.35999999999999, "end": 249.44, "text": " Make sure to go to LambdaLabs.com slash papers and sign up for one of their amazing GPU"}, {"start": 249.44, "end": 250.92, "text": " instances today."}, {"start": 250.92, "end": 254.51999999999998, "text": " Thanks to 
Lambda for helping us make better videos for you."}, {"start": 254.52, "end": 284.08000000000004, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=cpxtd-FKY1Y
These Natural Images Fool Neural Networks (And Maybe You Too)
❤️ Check out Weights & Biases here and sign up for a free demo: https://www.wandb.com/papers Their blog post on training a neural network is available here: https://www.wandb.com/articles/mnist 📝 The paper "Natural Adversarial Examples" and its dataset are available here: https://arxiv.org/abs/1907.07174 https://github.com/hendrycks/natural-adv-examples Andrej Karpathy's image classifier: https://cs.stanford.edu/people/karpathy/convnetjs/demo/cifar10.html You can also join us here to get early access to these videos: https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Anastasia Marchenkova, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-4344997/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In the last few years, Neural Network based learning algorithms became so good at image recognition tasks that they can often rival and sometimes even outperform humans in these endeavors. Beyond making these Neural Networks even more accurate in these tasks, interestingly, there are plenty of research works on how to attack and mislead these Neural Networks. I think this area of research is extremely exciting and I'll now try to show you why. One of the first examples of an adversarial attack can be performed as follows. We present such a classifier with an image of a bus and it will successfully tell us that yes, this is indeed a bus. Nothing too crazy here. Now we show it not an image of a bus, but a bus plus some carefully crafted noise that is barely perceptible, which forces the Neural Network to misclassify it as an ostrich. I will stress that this is not just any kind of noise, but the kind of noise that exploits biases in the Neural Network, which is by no means easy or trivial to craft. However, if we succeed at that, this kind of adversarial attack can be pulled off on many different kinds of images. Everything that you see here on the right will be classified as an ostrich by the Neural Network these noise patterns were crafted for. In a later work, researchers of the Google Brain team found that we can not only coerce the Neural Network into making some mistake, but we can even force it to make exactly the kind of mistake we want. This example here reprograms an image classifier to count the number of squares in our images. However, interestingly, some adversarial attacks do not need carefully crafted noise or any tricks for that matter. Did you know that many of them occur naturally in nature? This new work contains a brutally hard data set with such images that throw off even the best neural image recognition systems. Let's have a look at an example. If I were the Neural Network, I would look at this squirrel and claim that with high confidence I can tell you that this is a sea lion. And you, human, may think that this is a dragonfly, but you would be wrong. I'm pretty sure that this is a manhole cover. Well, except that it's not. The paper shows many of these examples, some of which don't really occur in my brain. For instance, I don't see this mushroom as a pretzel at all, but there was something about that dragonfly that upon a cursory look may get registered as a manhole cover. If you look quickly, you see a squirrel here, just kidding. It's a bullfrog. I feel that if I look at some of these with a fresh eye, sometimes I get a similar impression as the Neural Network. I'll put up a bunch more examples for you here. Let me know in the comments which are the ones that got you. Very cool project. I love it. What's even better, this data set, by the name ImageNet-A, is now available for everyone free of charge. And if you remember, at the start of the video, I said that it is brutally hard for Neural Networks to identify what is going on here. So what kind of success rates can we expect? 70%, maybe 50%, nope, 2%. Wow. In a world where some of these learning-based image classifiers are better than us at some data sets, they are vastly outclassed by us humans on these natural adversarial examples. If you have a look at the paper, you will see that the currently known techniques to improve the robustness of training show little to no improvement on this.
I cannot wait to see some follow-up papers on how to crack this nut. We can learn so much from this paper and we will likely learn even more from these follow-up works. Make sure to subscribe and also hit the bell icon to never miss future episodes. What a time to be alive. This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford and Berkeley. In this post, they show you how to train a state-of-the-art machine learning model with over 99% accuracy on classifying quickly handwritten numbers and how to use their tools to get a crystal clear understanding of what your model exactly does and what part of the numbers it is looking at. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
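For the crafted-noise attacks mentioned at the start of the transcript, one widely used and very simple recipe is the fast gradient sign method. The original bus-to-ostrich example used a different, optimization-based attack, so treat this PyTorch sketch as a generic illustration of gradient-based adversarial noise, assuming a differentiable classifier called model and an image tensor scaled to [0, 1].

import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    # image: (1, C, H, W) tensor in [0, 1]; label: (1,) tensor with the true class index
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel a tiny step in the direction that increases the loss,
    # then clamp so the result is still a valid image.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

The natural adversarial examples in the ImageNet-A dataset need no such step at all; they are unmodified photographs that the classifiers get wrong on their own, which is exactly what makes them so interesting.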
[{"start": 0.0, "end": 4.36, "text": " Dear Fellow Scholars, this is 2 Minute Papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.36, "end": 9.98, "text": " In the last few years, Neural Network based learning algorithms became so good at image recognition"}, {"start": 9.98, "end": 16.7, "text": " tasks that they can often rival and sometimes even outperform humans in these endeavors."}, {"start": 16.7, "end": 21.580000000000002, "text": " Beyond making these Neural Networks even more accurate in these tasks, interestingly, there"}, {"start": 21.580000000000002, "end": 26.6, "text": " are plenty of research works on how to attack and mislead these Neural Networks."}, {"start": 26.6, "end": 32.32, "text": " I think this area of research is extremely exciting and I'll now try to show you why."}, {"start": 32.32, "end": 36.92, "text": " One of the first examples of an adversarial attack can be performed as follows."}, {"start": 36.92, "end": 42.32, "text": " We present such a classifier with an image of a bus and it will successfully tell us that"}, {"start": 42.32, "end": 45.36, "text": " yes, this is indeed a bus."}, {"start": 45.36, "end": 46.92, "text": " Nothing too crazy here."}, {"start": 46.92, "end": 53.08, "text": " Now we show it not an image of a bus, but a bus plus some carefully crafted noise that"}, {"start": 53.08, "end": 59.04, "text": " is barely perceptible that forces the Neural Network to misclassify it as an ostrich."}, {"start": 59.04, "end": 63.72, "text": " I will stress that this is not any kind of noise, but the kind of noise that exploits"}, {"start": 63.72, "end": 69.64, "text": " biases in the Neural Network which is by no means easy or trivial to craft."}, {"start": 69.64, "end": 74.8, "text": " However, if we succeed at that, this kind of adversarial attack can be pulled off on"}, {"start": 74.8, "end": 77.2, "text": " many different kinds of images."}, {"start": 77.2, "end": 81.2, "text": " Everything that you see here on the right will be classified as an ostrich by the Neural"}, {"start": 81.2, "end": 84.24000000000001, "text": " Network these noise patterns were crafted for."}, {"start": 84.24000000000001, "end": 89.2, "text": " In a later work, researchers of the Google Brain team found that we can not only coerce"}, {"start": 89.2, "end": 94.68, "text": " the Neural Network into making some mistake, but we can even force it to make exactly the"}, {"start": 94.68, "end": 97.0, "text": " kind of mistake we want."}, {"start": 97.0, "end": 103.16, "text": " This example here reprograms an image classifier to count the number of squares in our images."}, {"start": 103.16, "end": 109.4, "text": " However, interestingly, some adversarial attacks do not need carefully crafted noise or any"}, {"start": 109.4, "end": 111.16, "text": " tricks for that matter."}, {"start": 111.16, "end": 115.67999999999999, "text": " Did you know that many of them occur naturally in nature?"}, {"start": 115.67999999999999, "end": 121.36, "text": " This new work contains a brutally hard data set with such images that throw off even the"}, {"start": 121.36, "end": 123.92, "text": " best neural image recognition systems."}, {"start": 123.92, "end": 126.03999999999999, "text": " Let's have a look at an example."}, {"start": 126.03999999999999, "end": 131.8, "text": " If I were the Neural Network, I would look at this Queryl and claim that with high confidence"}, {"start": 131.8, "end": 135.28, "text": " I can tell you that this is a sea lion."}, {"start": 135.28, "end": 139.6, 
"text": " And you human may think that this is a dragonfly, but you would be wrong."}, {"start": 139.6, "end": 142.92, "text": " I'm pretty sure that this is a manhole cover."}, {"start": 142.92, "end": 145.16, "text": " Well, except that it's not."}, {"start": 145.16, "end": 150.76, "text": " The paper shows many of these examples, some of which don't really occur in my brain."}, {"start": 150.76, "end": 155.51999999999998, "text": " For instance, I don't see this mushroom as a pretzel at all, but there was something"}, {"start": 155.51999999999998, "end": 162.32, "text": " about that dragonfly that upon a cursory look may get registered as a manhole cover."}, {"start": 162.32, "end": 166.04, "text": " If you look quickly, you see a squirrel here, just kidding."}, {"start": 166.04, "end": 167.04, "text": " It's a bullfrog."}, {"start": 167.04, "end": 172.23999999999998, "text": " I feel that if I look at some of these with a fresh eye, sometimes I get a similar impression"}, {"start": 172.23999999999998, "end": 173.23999999999998, "text": " as the Neural Network."}, {"start": 173.23999999999998, "end": 176.44, "text": " I'll put up a bunch of more examples for you here."}, {"start": 176.44, "end": 179.95999999999998, "text": " Let me know in the comments which are the ones that got you."}, {"start": 179.95999999999998, "end": 180.95999999999998, "text": " Very cool project."}, {"start": 180.95999999999998, "end": 181.95999999999998, "text": " I love it."}, {"start": 181.95999999999998, "end": 187.72, "text": " What's even better, this data set by the name ImageNet A is now available for everyone"}, {"start": 187.72, "end": 189.2, "text": " free of charge."}, {"start": 189.2, "end": 193.72, "text": " And if you remember, at the start of the video, I said that it is brutally hard for Neural"}, {"start": 193.72, "end": 196.95999999999998, "text": " Networks to identify what is going on here."}, {"start": 196.96, "end": 200.4, "text": " So what kind of success rates can we expect?"}, {"start": 200.4, "end": 205.28, "text": " 70%, maybe 50%, nope, 2%."}, {"start": 205.28, "end": 207.28, "text": " Wow."}, {"start": 207.28, "end": 211.32, "text": " In a world where some of these learning-based image classifiers are better than us at"}, {"start": 211.32, "end": 216.8, "text": " some data sets, they are vastly outclassed by us humans on these natural adversarial"}, {"start": 216.8, "end": 218.04000000000002, "text": " examples."}, {"start": 218.04000000000002, "end": 222.12, "text": " If you have a look at the paper, you will see that the currently known techniques to improve"}, {"start": 222.12, "end": 226.48000000000002, "text": " the robustness of training show little to no improvement on this."}, {"start": 226.48, "end": 230.67999999999998, "text": " I cannot wait to see some follow-up papers on how to correct this nut."}, {"start": 230.67999999999998, "end": 235.28, "text": " We can learn so much from this paper and we likely learn even more from these follow-up"}, {"start": 235.28, "end": 236.28, "text": " works."}, {"start": 236.28, "end": 240.72, "text": " Make sure to subscribe and also hit the bell icon to never miss future episodes."}, {"start": 240.72, "end": 242.51999999999998, "text": " What a time to be alive."}, {"start": 242.51999999999998, "end": 245.76, "text": " This episode has been supported by weights and biases."}, {"start": 245.76, "end": 250.48, "text": " Wates and biases provides tools to track your experiments in your deep learning projects."}, {"start": 250.48, "end": 
255.79999999999998, "text": " It can save you a ton of time and money in these projects and is being used by OpenAI"}, {"start": 255.8, "end": 258.92, "text": " Toyota Research, Stanford and Berkeley."}, {"start": 258.92, "end": 263.2, "text": " In this post, they show you how to train a state-of-the-art machine learning model with"}, {"start": 263.2, "end": 269.48, "text": " over 99% accuracy on classifying quickly handwritten numbers and how to use their tools"}, {"start": 269.48, "end": 274.56, "text": " to get a crystal clear understanding of what your model exactly does and what part of the"}, {"start": 274.56, "end": 276.28000000000003, "text": " letters it is looking at."}, {"start": 276.28000000000003, "end": 283.48, "text": " Make sure to visit them through whendb.com slash papers, www.wanddb.com slash papers or just"}, {"start": 283.48, "end": 287.8, "text": " click the link in the video description and you can get a free demo today."}, {"start": 287.8, "end": 291.44, "text": " Our thanks to weights and biases for helping us make better videos for you."}, {"start": 291.44, "end": 321.4, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=_s7Bg6yVOdo
OpenAI Safety Gym: A Safe Place For AIs To Learn 💪
❤️ Check out Linode here and get $20 free credit on your account: https://www.linode.com/papers 📝 The paper "Benchmarking Safe Exploration in Deep Reinforcement Learning" is available here: https://openai.com/blog/safety-gym/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Anastasia Marchenkova, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Reinforcement learning is a technique in the field of machine learning to learn how to navigate a labyrinth, play a video game, or to teach a digital creature to walk. Usually, we are interested in a series of actions that are in some sense optimal in a given environment. Despite the fact that many enormous tomes exist to discuss the mathematical details, the intuition behind the algorithm itself is remarkably simple. Choose an action, and if you get rewarded for it, try to find out which series of actions led to this and keep doing it. If the rewards are not coming, try something else. The reward can be, for instance, our score in a computer game or how far our digital creature could walk. Approximately 300 episodes ago, OpenAI published one of their first major works by the name Gym, where anyone could submit their solutions and compete against each other on the same games. It was like Disney World for reinforcement learning researchers. A moment ago, I noted that in reinforcement learning, if the rewards are not coming, we have to try something else. Hmm, is that so? Because there are cases where trying crazy new actions is downright dangerous. For instance, imagine that during the training of this robot arm, initially, it would try random actions and start flailing about, where it may damage itself or some other equipment, or even worse, humans may come to harm. Here you see an amusing example of DeepMind's reinforcement learning agent from 2017 that liked to engage in similar flailing activities. So, what could be a possible solution for this? Well, have a look at this new work from OpenAI by the name Safety Gym. In this paper, they introduce what they call the constrained reinforcement learning formulation, in which these agents can be discouraged from performing actions that are deemed potentially dangerous in an environment. You can see an example here where the AI has to navigate through these environments and achieve a task such as reaching the green goal signs, pushing buttons, or moving a box around to a prescribed position. The constraint part comes in whenever some sort of safety violation happens, which, in this environment, means collisions with the boxes or blue regions. All of these events are highlighted with this red sphere, and a good learning algorithm should be instructed to try to avoid these. The goal of this project is that in the future, for reinforcement learning algorithms, not only the efficiency, but the safety scores should also be measured. This way, a self-driving AI would be incentivized to not just race recklessly to the finish line, but to respect our safety standards along the journey as well. While noting that clearly self-driving cars may be achieved with other kinds of algorithms, many of which have been in the works for years, there are many additional applications for this work. For instance, the paper discusses the case of incentivizing recommender systems to not show psychologically harmful content to their users, or to make sure that a medical question-answering system does not mislead us with false information. This episode has been supported by Linode. Linode is the world's largest independent cloud computing provider. They offer you virtual servers that make it easy and affordable to host your own app, site, project, or anything else in the cloud. Whether you are a Linux expert or just starting to tinker with your own code, Linode will be useful for you.
A few episodes ago, we played with an implementation of OpenAI's GPT2, where our excited viewers accidentally overloaded the system. With Linode's load balancing technology and instances ranging from shared nanodes, all the way up to dedicated GPUs, you don't have to worry about your project being overloaded. To get $20 of free credit, make sure to head over to Linode.com slash papers and sign up today using the promo code Papers20. Our thanks to Linode for supporting the series and helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
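As a rough illustration of the constrained reinforcement learning setup described above, here is a Python sketch in the classic Gym interface: alongside the usual reward, the environment reports a per-step safety cost (Safety Gym exposes this through the info dictionary), and a Lagrange-style multiplier turns accumulated violations into a reward penalty. The environment name, the RandomAgent stand-in, and the specific update rule are illustrative assumptions for this sketch, not the paper's exact benchmark setup.

import gym
import safety_gym  # assumed installed; importing registers the Safety Gym environments

class RandomAgent:
    # Stand-in for a real constrained RL learner: acts randomly, learns nothing.
    def __init__(self, action_space):
        self.action_space = action_space
    def act(self, obs):
        return self.action_space.sample()
    def observe(self, obs, penalized_reward, done):
        pass  # a real agent would update its policy from the penalized reward here

env = gym.make("Safexp-PointGoal1-v0")      # example task: reach the goal, avoid hazards
agent = RandomAgent(env.action_space)
cost_limit, lam, lam_lr = 25.0, 1.0, 0.01   # per-episode safety budget, multiplier, its step size

for episode in range(10):
    obs, done, ep_cost = env.reset(), False, 0.0
    while not done:
        action = agent.act(obs)
        obs, reward, done, info = env.step(action)
        cost = info.get("cost", 0.0)                    # signals a safety violation this step
        ep_cost += cost
        agent.observe(obs, reward - lam * cost, done)   # violations reduce the effective reward
    # Dual update: raise the penalty whenever the episode exceeded its safety budget
    lam = max(0.0, lam + lam_lr * (ep_cost - cost_limit))

The point of the benchmark is that both numbers get reported: the return the agent collects and the safety cost it incurs while collecting it.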
[{"start": 0.0, "end": 4.0, "text": " Dear Fellow Scholars, this is two-minute papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.0, "end": 9.44, "text": " Reinforcement learning is a technique in the field of machine learning to learn how to navigate an"}, {"start": 9.44, "end": 16.080000000000002, "text": " elaborate, play a video game, or to teach a digital creature to walk. Usually, we are interested"}, {"start": 16.080000000000002, "end": 22.56, "text": " in a series of actions that are in some sense optimal in a given environment. Despite the fact"}, {"start": 22.56, "end": 28.32, "text": " that many enormous tomes exist to discuss the mathematical details, the intuition behind the"}, {"start": 28.32, "end": 34.32, "text": " algorithm itself is remarkably simple. Choose an action, and if you get rewarded for it, try to"}, {"start": 34.32, "end": 40.480000000000004, "text": " find out which series of actions led to this and keep doing it. If the rewards are not coming,"}, {"start": 40.480000000000004, "end": 47.120000000000005, "text": " try something else. The reward can be, for instance, our score in a computer game or how far our"}, {"start": 47.120000000000005, "end": 53.519999999999996, "text": " digital creature could walk. Approximately 300 episodes ago, OpenAI published one of their first"}, {"start": 53.52, "end": 59.2, "text": " major works by the name Jim, where anyone could submit their solutions and compete against each"}, {"start": 59.2, "end": 64.88, "text": " other on the same games. It was like Disney World for reinforcement learning researchers."}, {"start": 64.88, "end": 70.16, "text": " A moment ago, I noted that in reinforcement learning, if the rewards are not coming,"}, {"start": 70.16, "end": 77.12, "text": " we have to try something else. Hmm, is that so? Because there are cases where trying crazy"}, {"start": 77.12, "end": 82.80000000000001, "text": " new actions is downright dangerous. For instance, imagine that during the training of this robot"}, {"start": 82.8, "end": 89.67999999999999, "text": " arm, initially, it would try random actions and start flailing about where it made damage itself"}, {"start": 89.67999999999999, "end": 96.0, "text": " some other equipment or even worse, humans may come to harm. Here you see an amusing example"}, {"start": 96.0, "end": 101.52, "text": " of DeepMind's reinforcement learning agent from 2017 that like to engage in similar flailing"}, {"start": 101.52, "end": 107.75999999999999, "text": " activities. So, what could be a possible solution for this? Well, have a look at this new work from"}, {"start": 107.76, "end": 114.80000000000001, "text": " OpenAI by the name Safety Jim. In this paper, they introduce what they call the Constraint Re-Enforcement"}, {"start": 114.80000000000001, "end": 120.16000000000001, "text": " Learning Formulation in which these agents can be discouraged from performing actions that are"}, {"start": 120.16000000000001, "end": 125.76, "text": " deemed potentially dangerous in an environment. You can see an example here where the AI has to"}, {"start": 125.76, "end": 131.44, "text": " navigate through these environments and achieve a task such as reaching the green goal signs,"}, {"start": 131.44, "end": 138.07999999999998, "text": " push buttons, or move a box around to a prescribed position. 
The Constraint part comes in whenever"}, {"start": 138.07999999999998, "end": 144.16, "text": " some sort of safety violation happens, which are in this environment collisions with the boxes"}, {"start": 144.16, "end": 149.44, "text": " or blue regions. All of these events are highlighted with this red sphere and the good learning"}, {"start": 149.44, "end": 155.28, "text": " algorithm should be instructed to try to avoid these. The goal of this project is that in the future,"}, {"start": 155.28, "end": 160.88, "text": " for reinforcement learning algorithms, not only the efficiency, but the safety scores should also"}, {"start": 160.88, "end": 166.79999999999998, "text": " be measured. This way, a self-driving AI would be incentivized to not only drive recklessly to the"}, {"start": 166.79999999999998, "end": 173.2, "text": " finish line, but respect our safety standards along the journey as well. While noting that clearly"}, {"start": 173.2, "end": 178.16, "text": " self-driving cars may be achieved with other kinds of algorithms, many of which have been in"}, {"start": 178.16, "end": 183.51999999999998, "text": " the works for years, there are many additional applications for this work. For instance, the paper"}, {"start": 183.51999999999998, "end": 189.44, "text": " discusses the case of incentivizing recommender systems to not show psychologically harmful content"}, {"start": 189.44, "end": 194.96, "text": " to its users, or to make sure that a medical question-answering system does not mislead us with"}, {"start": 194.96, "end": 200.48, "text": " false information. This episode has been supported by Linode. Linode is the world's largest"}, {"start": 200.48, "end": 206.32, "text": " independent cloud computing provider. They offer you virtual servers that make it easy and affordable"}, {"start": 206.32, "end": 212.8, "text": " to host your own app, site, project, or anything else in the cloud. Whether you are a Linodex expert"}, {"start": 212.8, "end": 218.16, "text": " or just starting to tinker with your own code, Linode will be useful for you. A few episodes ago,"}, {"start": 218.16, "end": 224.4, "text": " we played with an implementation of OpenAI's GPT2, where our excited viewers accidentally"}, {"start": 224.4, "end": 230.48, "text": " overloaded the system. With Linode's load balancing technology and instances ranging from shared"}, {"start": 230.48, "end": 235.51999999999998, "text": " nanodes, all the way up to dedicated GPUs, you don't have to worry about your project being"}, {"start": 235.51999999999998, "end": 241.84, "text": " overloaded. To get $20 of free credit, make sure to head over to Linode.com slash papers and"}, {"start": 241.84, "end": 248.0, "text": " sign up today using the promo code Papers20. Our thanks to Linode for supporting the series and"}, {"start": 248.0, "end": 252.08, "text": " helping us make better videos for you. Thanks for watching and for your generous support,"}, {"start": 252.08, "end": 281.2, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Wtz-bNywXBY
We Taught an AI To Synthesize Materials 🔮
📝 Our paper "Photorealistic Material Editing Through Direct Image Manipulation" and its source code are now available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/photorealistic-material-editing/ ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Anastasia Marchenkova, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #neuralrendering
Creating photorealistic materials for light transport algorithms requires carefully fine-tuning a set of material properties to achieve a desired artistic effect. This is a lengthy process that involves a trained artist with specialized knowledge. In this work, we propose a system that only requires basic image processing knowledge and enables users without photorealistic rendering experience to create high-quality materials. This is highly desirable as human thinking is inherently visual and not based on physically based material parameters. In our proposed workflow, all the user needs to do is apply a few intuitive transforms to a source image, and in the next step our technique produces the closest photorealistic material that approximates this target image. One of our key observations is that even though this processed target image is often not physically achievable, in many cases a photorealistic material model can be found that closely matches this image. Our method generates results in less than 30 seconds and works in the presence of poorly edited target images, like the discoloration of the pedestal or the background of the gold material here. This technique is especially useful early in the material design process, where the artist seeks to rapidly iterate over a variety of possible artistic effects. We also propose an extension to predict image sequences with a tight budget of 1-2 seconds per image. To achieve this, we propose a simple optimization formulation that is able to produce accurate solutions but takes relatively long due to the lack of a useful initial guess. Our other main observation is that an approximate solution can also be achieved without an optimization step by implementing a simple encoder neural network. The main advantage of this method is that it produces a solution within a few milliseconds, with the drawback that the provided solution is only approximate. We refer to this as the inversion technique. Both of these solutions suffer from drawbacks. The optimization approach provides results that resemble the target image, but is impractical due to the fact that it requires too many function evaluations and gets stuck in local minima, whereas the inversion technique rapidly produces a solution that is more approximate in nature. We show that the best aspects of these two solutions can be fused together into a hybrid method that initializes our optimizer with the prediction of the neural network. This hybrid method opens up the possibility of creating novel materials by stitching together the best aspects of two or more materials, deleting unwanted features through image inpainting, contrast enhancement, or even fusing together two materials. These synthesized materials can also be easily inserted into already existing scenes by the user. In this scene, we made a material mixture to achieve a richer nebula effect inside the glass. We also show in the paper that this hybrid method not only gives a head start to the optimizer by endowing it with a useful initial guess, but provides strictly higher quality outputs than either of the two previous solutions on all of our test cases. Furthermore, if at most a handful of materials are sought, the total modeling times reveal that our technique compares favorably to previous work on mass-scale material synthesis. We believe this method will offer an appealing entry point for novices into the world of photorealistic material modeling. Thank you for your attention.
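The hybrid scheme described above, a network prediction used as the initial guess for a short optimization run, can be summarized in a few lines of Python. The inversion_network and render_material callables below are placeholders for the paper's trained network and photorealistic renderer, and the optimizer and image-space loss are illustrative choices for this sketch rather than the exact ones used in the paper.

import numpy as np
from scipy.optimize import minimize

def hybrid_material_fit(target_image, inversion_network, render_material):
    # Step 1: the trained inversion network gives a fast, approximate parameter guess.
    x0 = np.asarray(inversion_network(target_image), dtype=float)

    # Step 2: a short local optimization refines it against the edited target image.
    def image_loss(params):
        rendered = render_material(params)             # image predicted for these material parameters
        return float(np.mean((rendered - target_image) ** 2))

    result = minimize(image_loss, x0, method="Nelder-Mead",
                      options={"maxiter": 200})        # small budget thanks to the good initial guess
    return result.x

Starting from the network's guess, rather than a random initialization, is what lets the refinement finish within the tight per-image budget mentioned above while avoiding the poor local minima of optimization from scratch.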
[{"start": 0.0, "end": 10.0, "text": " Creating photorealistic materials for light transport algorithms requires carefully fine-tuning a set of material properties to achieve a desired artistic effect."}, {"start": 10.0, "end": 15.0, "text": " This is a lengthy process that involves a trained artist with specialized knowledge."}, {"start": 15.0, "end": 26.5, "text": " In this work, we propose a system that only requires basic image processing knowledge and enables users without photorealistic rendering experience to create high quality materials."}, {"start": 26.5, "end": 34.0, "text": " This is highly desirable as human thinking is inherently visual and not based on physically based material parameters."}, {"start": 34.0, "end": 48.0, "text": " In our proposed workflow, all the user needs to do is apply a few intuitive transforms to a source image and in the next step our technique produces the closest photorealistic material that approximates this target image."}, {"start": 48.0, "end": 60.5, "text": " One of our key observations is that even though this process target image is often not physically achievable, in many cases a photorealistic material model can be found that closely matches this image."}, {"start": 60.5, "end": 73.0, "text": " Our method generates results in less than 30 seconds and works in the presence of poorly edited target images like the discoloration of the pedestal or the background of the gold material here."}, {"start": 73.0, "end": 83.0, "text": " This technique is especially useful early in the material design process where the artist seeks to rapidly iterate over a variety of possible artistic effects."}, {"start": 83.0, "end": 91.0, "text": " We also propose an extension to predict image sequences with a tight budget of 1-2 seconds per image."}, {"start": 91.0, "end": 103.0, "text": " To achieve this, we propose a simple optimization formulation that is able to produce accurate solutions that takes relatively long due to the lack of a useful initial guess."}, {"start": 103.0, "end": 113.0, "text": " Our other main observation is that an approximate solution can also be achieved without an optimization step by implementing a simple and coder neural network."}, {"start": 113.0, "end": 122.0, "text": " The main advantage of this method is that it produces a solution within a few milliseconds with the drawback that the provided solution is only approximate."}, {"start": 122.0, "end": 127.0, "text": " We refer to this as the inversion technique. 
Both of these solutions suffer from drawbacks."}, {"start": 127.0, "end": 139.0, "text": " The optimization approach provides results that resemble the target image that is impracticable due to the fact that it requires too many function evaluations and gets stuck in local minima,"}, {"start": 139.0, "end": 145.0, "text": " whereas the inversion technique rapidly produces a solution that is more approximate in nature."}, {"start": 145.0, "end": 155.0, "text": " We show that the best aspects of these two solutions can be fused together into a hybrid method that initializes our optimizer with the prediction of the neural network."}, {"start": 155.0, "end": 165.0, "text": " This hybrid method opens up the possibility of creating novel materials by stitching together the best aspects of two or more materials,"}, {"start": 165.0, "end": 176.0, "text": " deleting unwanted features through image in-painting, contrast enhancement, or even fusing together two materials."}, {"start": 176.0, "end": 182.0, "text": " These synthesized materials can also be easily inserted into already existing scenes by the user."}, {"start": 182.0, "end": 189.0, "text": " In this scene, we made a material mixture to achieve a richer nebula effect inside the glass."}, {"start": 189.0, "end": 198.0, "text": " We also show in the paper that this hybrid method not only gives a head start to the optimizer by endowing it with a useful initial guess,"}, {"start": 198.0, "end": 205.0, "text": " but provides strictly higher quality outputs than any of the two previous solutions on all of our test cases."}, {"start": 205.0, "end": 216.0, "text": " Furthermore, if at most a handful of materials are sought, the total modeling times reveal that our technique compares favorably to previous work on mass scale material synthesis."}, {"start": 216.0, "end": 224.0, "text": " We believe this method will offer an appealing entry point for novices into the world of photorealistic material modeling."}, {"start": 224.0, "end": 249.0, "text": " Thank you for your attention."}]
Two Minute Papers
https://www.youtube.com/watch?v=tGJ4tEwhgo8
Differentiable Rendering is Amazing!
❤️ Check out Weights & Biases here and sign up for a free demo: https://www.wandb.com/papers Their mentioned blog post is available here:https://www.wandb.com/articles/p-picking-a-machine-learning-model 📝 The paper "Reparameterizing discontinuous integrands for differentiable rendering" is available here: https://rgl.epfl.ch/publications/Loubet2019Reparameterizing 📝 Our "Gaussian Material Synthesis" paper is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/ The free Rendering course on light transport is available here: https://www.youtube.com/playlist?list=PLujxSBD-JXgnGmsn7gEyN28P1DnRZG7qi 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Anastasia Marchenkova, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #neuralrendering
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This beautiful scene is from our paper by the name Gaussian Material Synthesis, and it contains more than a hundred different materials, each of which has been learned and synthesized by an AI. None of these daisies and lions are alike; each of them has a different material model. Normally, to obtain results like this, an artist has to engage in a direct interaction with an interface that you see here. This contains a ton of parameters, and to be able to use it properly, the artist needs to have years of experience in photorealistic rendering and material modeling. But, unfortunately, the problem gets even worse. Since a proper light simulation program needs to create an image with the new material parameters, this initially results in a noisy image that typically takes 40 to 60 seconds to clear up. We have to wait out these 40 to 60 seconds for every single parameter change that we make. This would take several hours in practical cases. The goal of this project was to speed up workflows like this by teaching an AI the concept of material models, such as metals, minerals, and translucent materials. With our technique, first, we show the user a gallery of random materials, and they assign a score to each of them, saying that I like this one, I didn't like that one, and we get an AI to learn our preferences and recommend new materials for us. We also created a neural renderer that replaces the light simulation program and creates a near-perfect image of the output in about 4 milliseconds. That's not just real time, that's 10 times faster than real time. That is very fast and accurate. However, our neural renderer is limited to the scene that you see here. So, the question is, is it possible to create something that is a little more general? Well, let's have a look at this new work that performs something similar that they call differentiable rendering. The problem formulation is the following. We specify a target image that is either rendered by a computer program or, even better, a photo. The input is a pitiful approximation of it, and now hold on to your papers, because it progressively changes the input materials, textures, and even the geometry to match this photo. My goodness, even the geometry. This thing is doing three people's jobs when given a target photo, and you haven't seen the best part here, because there is an animation that shows how the input evolves over time as we run this technique. As we start out, it almost immediately matches the material properties and the base shape, and after that, it refines the geometry to make sure that the more intricate details are also matched properly. As always, some limitations apply: for instance, area light sources are fine, but it doesn't support point light sources, and it may show problems in the presence of discontinuities and mirror-like materials. I cannot wait to see where this ends up a couple of papers down the line, and I really hope this thing takes off. In my opinion, this is one of the most refreshing and exciting ideas in photorealistic rendering research as of late. More differentiable rendering papers, please. I would like to stress that there are also other works on differentiable rendering. This is not the first one. However, if you have a closer look at the paper in the description, you will see that it does better than previous techniques.
In this series, I try to make you feel how I feel when I read these papers, and I hope I have managed this time, but you be the judge. Please let me know in the comments. And if this got you excited to learn more about light transport, I am holding a master-level course on it at the Technical University of Vienna. This course used to take place behind closed doors, but I feel that the teachings shouldn't only be available for the 20 to 30 people who can afford a university education, but they should be available for everyone. So, I recorded the entirety of the course, and it is now available for everyone, free of charge. If you are interested, have a look at the video description to watch them. This episode has been supported by Weights and Biases. Weights and Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford and Berkeley. In this tutorial, they show you how to visualize your machine learning models and how to choose the best one with the help of their tools. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights and Biases for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
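The transcript above describes differentiable rendering as rendering an image from scene parameters, comparing it with a target image, and pushing gradients back through the renderer to update materials, textures, and geometry. The sketch below is a hypothetical toy version of that loop, not the method from the paper: the "renderer" is a single Lambertian shading step, only an albedo color and a light direction are optimized, and the gradients are written out by hand instead of using automatic differentiation.

```python
import numpy as np

rng = np.random.default_rng(1)
normals = rng.normal(size=(64, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)   # one unit normal per "pixel"

def render(albedo, light):
    """Toy Lambertian renderer: per-pixel color = albedo * max(0, n . l)."""
    shading = np.maximum(normals @ light, 0.0)               # (64,)
    return shading[:, None] * albedo[None, :]                # (64, 3) image

# A ground-truth scene produces the target image we want to match.
target = render(np.array([0.8, 0.2, 0.1]), np.array([0.0, 0.0, 1.0]))

albedo = np.array([0.5, 0.5, 0.5])   # initial guesses for the scene parameters
light = np.array([0.3, 0.3, 0.9])

for _ in range(2000):
    shading = np.maximum(normals @ light, 0.0)
    residual = shading[:, None] * albedo[None, :] - target   # rendered minus target
    # Hand-written gradients of the squared error w.r.t. the scene parameters.
    grad_albedo = 2.0 * (residual * shading[:, None]).sum(axis=0)
    lit = (normals @ light) > 0.0                            # where max(., 0) passes gradient
    grad_light = 2.0 * ((residual @ albedo)[lit, None] * normals[lit]).sum(axis=0)
    albedo -= 1e-3 * grad_albedo
    light -= 1e-3 * grad_light

# Note: albedo and light intensity are only identified up to a joint scale here.
print("final image loss:", np.sum((render(albedo, light) - target) ** 2))
```

The hard part that the actual paper addresses, and that this toy completely sidesteps, is differentiating through the visibility discontinuities that appear at object silhouettes once the geometry itself is allowed to move.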
[{"start": 0.0, "end": 4.0, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zorna Ifeher."}, {"start": 4.0, "end": 8.8, "text": " This beautiful scene is from our paper by the name Gaussian Material Synthesis,"}, {"start": 8.8, "end": 11.8, "text": " and it contains more than a hundred different materials,"}, {"start": 11.8, "end": 15.6, "text": " each of which has been learned and synthesized by an AI."}, {"start": 15.6, "end": 18.6, "text": " None of these days is, and then the lions are alike,"}, {"start": 18.6, "end": 21.2, "text": " each of them have a different material model."}, {"start": 21.2, "end": 23.6, "text": " Normally, to obtain results like this,"}, {"start": 23.6, "end": 28.8, "text": " an artist has to engage in a direct interaction with an interface that you see here."}, {"start": 28.8, "end": 31.2, "text": " This contains a ton of parameters,"}, {"start": 31.2, "end": 33.2, "text": " and to be able to use it properly,"}, {"start": 33.2, "end": 38.8, "text": " the artist needs to have years of experience in photorealistic rendering and material modeling."}, {"start": 38.8, "end": 42.4, "text": " But, unfortunately, the problem gets even worse."}, {"start": 42.4, "end": 47.6, "text": " Since a proper light simulation program needs to create an image with the new material parameters,"}, {"start": 47.6, "end": 54.0, "text": " this initially results in a noisy image that typically takes 40 to 60 seconds to clear up."}, {"start": 54.0, "end": 59.4, "text": " We have to wait out these 40 to 60 seconds for every single parameter change that we make."}, {"start": 59.4, "end": 62.6, "text": " This would take several hours in practical cases."}, {"start": 62.6, "end": 65.8, "text": " The goal of this project was to speed up workflows like this"}, {"start": 65.8, "end": 69.2, "text": " by teaching an AI the concept of material models,"}, {"start": 69.2, "end": 73.4, "text": " such as metals, minerals, and translucent materials."}, {"start": 73.4, "end": 77.7, "text": " With our technique, first, we show the user a gallery of random materials"}, {"start": 77.7, "end": 80.2, "text": " with signs a score to each of them,"}, {"start": 80.2, "end": 82.0, "text": " saying that I like this one,"}, {"start": 82.0, "end": 86.1, "text": " I didn't like that one, and get an AI to learn our preferences"}, {"start": 86.1, "end": 88.7, "text": " and recommend new materials for us."}, {"start": 88.7, "end": 93.1, "text": " We also created a neural render that replaces the light simulation program"}, {"start": 93.1, "end": 97.9, "text": " and creates a near-perfect image of the output in about 4 milliseconds."}, {"start": 97.9, "end": 102.4, "text": " That's not real time, that's 10 times faster than real time."}, {"start": 102.4, "end": 105.0, "text": " That is very fast and accurate."}, {"start": 105.0, "end": 109.5, "text": " However, our neural renderer is limited to the scene that you see here."}, {"start": 109.5, "end": 114.7, "text": " So, the question is, is it possible to create something that is a little more general?"}, {"start": 114.7, "end": 118.6, "text": " Well, let's have a look at this new work that performs something similar"}, {"start": 118.6, "end": 121.5, "text": " that they call differentiable rendering."}, {"start": 121.5, "end": 124.1, "text": " The problem formulation is the following."}, {"start": 124.1, "end": 128.8, "text": " We specify a target image that is either rendered by a computer program"}, {"start": 128.8, "end": 131.4, "text": " or even 
better the photo."}, {"start": 131.4, "end": 134.2, "text": " The input is a pitiful approximation of it"}, {"start": 134.2, "end": 136.5, "text": " and now hold on to your papers"}, {"start": 136.5, "end": 140.5, "text": " because it progressively changes the input materials, textures,"}, {"start": 140.5, "end": 143.9, "text": " and even the geometry to match this photo."}, {"start": 143.9, "end": 146.7, "text": " My goodness, even the geometry."}, {"start": 146.7, "end": 150.7, "text": " This thing is doing three people's jobs when given a target photo"}, {"start": 150.7, "end": 153.0, "text": " and you haven't seen the best part here"}, {"start": 153.0, "end": 157.4, "text": " because there is an animation that shows how the input evolves over time"}, {"start": 157.4, "end": 159.0, "text": " as we run this technique."}, {"start": 159.0, "end": 163.1, "text": " As we start out, it almost immediately matches the material properties"}, {"start": 163.1, "end": 167.5, "text": " and the base shape and after that, it refines the geometry to make sure"}, {"start": 167.5, "end": 171.29999999999998, "text": " that the more intricate details are also matched properly."}, {"start": 171.29999999999998, "end": 174.5, "text": " As always, some limitations apply, for instance,"}, {"start": 174.5, "end": 178.7, "text": " area light sources are fine, but it doesn't support point light sources"}, {"start": 178.7, "end": 183.5, "text": " may show problems in the presence of discontinuities and mirror-like materials."}, {"start": 183.5, "end": 187.6, "text": " I cannot wait to see where this ends up a couple of papers down the line"}, {"start": 187.6, "end": 190.2, "text": " and I really hope this thing takes off."}, {"start": 190.2, "end": 194.39999999999998, "text": " In my opinion, this is one of the most refreshing and exciting ideas"}, {"start": 194.39999999999998, "end": 197.2, "text": " in photorealistic rendering research as of late."}, {"start": 197.2, "end": 199.89999999999998, "text": " More differentiable rendering papers, please."}, {"start": 199.89999999999998, "end": 204.29999999999998, "text": " I would like to stress that there are also other works on differentiable rendering."}, {"start": 204.29999999999998, "end": 206.29999999999998, "text": " This is not the first one."}, {"start": 206.29999999999998, "end": 209.7, "text": " However, if you have a closer look at the paper in the description,"}, {"start": 209.7, "end": 212.79999999999998, "text": " you will see that it does better than previous techniques."}, {"start": 212.79999999999998, "end": 217.7, "text": " In this series, I try to make you feel how I feel when I read these papers"}, {"start": 217.7, "end": 221.0, "text": " and I hope I have managed this time, but you be the judge."}, {"start": 221.0, "end": 222.79999999999998, "text": " Please let me know in the comments."}, {"start": 222.79999999999998, "end": 226.1, "text": " And if this got you excited to learn more about light transport,"}, {"start": 226.1, "end": 230.5, "text": " I am holding a master-level course on it at the Technical University of Vienna."}, {"start": 230.5, "end": 233.39999999999998, "text": " This course used to take place behind closed doors,"}, {"start": 233.39999999999998, "end": 238.5, "text": " but I feel that the teachings shouldn't only be available for the 20 to 30 people"}, {"start": 238.5, "end": 243.0, "text": " who can afford a university education, but they should be available for everyone."}, {"start": 243.0, "end": 249.0, "text": " So, 
recorded the entirety of the course and it is now available for everyone free of charge."}, {"start": 249.0, "end": 252.6, "text": " If you are interested, have a look at the video description to watch them."}, {"start": 252.6, "end": 255.7, "text": " This episode has been supported by weights and biases."}, {"start": 255.7, "end": 260.5, "text": " Wates and biases provides tools to track your experiments in your deep learning projects."}, {"start": 260.5, "end": 264.3, "text": " It can save you a ton of time and money in these projects"}, {"start": 264.3, "end": 269.4, "text": " and is being used by OpenAI, Toyota Research, Stanford and Berkeley."}, {"start": 269.4, "end": 273.59999999999997, "text": " In this tutorial, they show you how to visualize your machine learning models"}, {"start": 273.59999999999997, "end": 276.9, "text": " and how to choose the best one with the help of their tools."}, {"start": 276.9, "end": 283.4, "text": " Make sure to visit them through www.wndb.com slash papers"}, {"start": 283.4, "end": 287.9, "text": " or just click the link in the video description and you can get a free demo today."}, {"start": 287.9, "end": 291.9, "text": " Our thanks to weights and biases for helping us make better videos for you."}, {"start": 291.9, "end": 299.9, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=NlZJlFCh8MU
This AI Creates A Moving Digital Avatar Of You
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Neural Volumes: Learning Dynamic Renderable Volumes from Images" is available here: https://research.fb.com/publications/neural-volumes-learning-dynamic-renderable-volumes-from-images/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Anastasia Marchenkova, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Erik de Bruijn, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-1280538/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
This episode has been supported by Lambda. Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this series, we talk about research on all kinds of physics simulations, including fluids, collision physics, and we have even ventured into hair simulations. We mostly talk about how the individual hair strands should move and how they should look in terms of color and reflectance. Making these beautiful videos takes getting many, many moving parts right, but before all of that, the very first step is not any of those steps. First, we have to get the 3D geometry of these hairstyles into our simulation system. In a previous episode, we have seen an excellent work that does this well for human hair. But what if we would like to model not human hair, but something completely different? Well, hold on to your papers, because this new work is so general that it can look at an input image or video and give us not only a model of the human hair, but also human skin, garments, and of course, my favorite, smoke plumes and more. But if you look here, this part begs the following question. The input is an image and the output also looks like an image, and we need to make them similar. So, what's the big deal here? A copying machine can do that. No? Well, not really. Here's why. To create the output, we are working with something that indeed looks like an image, but it is not an image. It is a 3-dimensional cube in which we have to specify color and opacity values everywhere. After that, we simulate rays of light passing through this volume, which is a technique that we call ray marching, and this process has to produce the same 2D image through ray marching as what was given as an input. That's much, much harder than building a copying machine. As you see here, normally, this does not work well at all, because, for instance, a standard algorithm sees lights in the background and assumes that these are really bright and dense points. That is kind of true, but they are usually not even part of the data that we would like to reconstruct. To solve this issue, the authors propose learning to tell the foreground and background images apart, so they can be separated before we start the reconstruction of the human. And this is a good research paper, which means that if it contains multiple new techniques, each of them is tested separately to know how much they contribute to the final results. We get the previously seen dreadful results without the background separation step. Here are the results with the learned backgrounds. We can still see the lights due to the way the final image is constructed, and the fact that we have so little of this halo effect is really cool. Here, you see the results with the true background data, where the background learning step is not present. Note that this is cheating, because this data is not available for all cameras and backgrounds; however, it is a great way to test the quality of this learning step. The comparison of the learned method against this reveals that the two are very close, which is exactly what we are looking for. And finally, the input footage is also shown for reference. This is ultimately what we are trying to achieve, and as you see, the output is quite close to it. The final algorithm excels at reconstructing volume data for toys, smoke plumes, and humans alike. And the coolest part is that it works for not only stationary inputs, but for animations as well.
Wait, actually, there is something that is perhaps even cooler: with the magic of neural networks and latent spaces, we can even animate this data. Here you see an example of that, where an avatar is animated in real time by moving around this magenta dot. A limiting factor here is the resolution of this reconstruction. If you look closely, you can see that some fine details are missing, but you know the saying: given the rate of progress in machine learning research, two more papers down the line, this will likely be orders of magnitude better. And if you feel that you always need to take your daily dose of papers, my statistics show that many of you are subscribed, but didn't use the bell icon. If you click this bell icon, you will never miss a future episode and can properly engage in your paper addiction. This episode has been supported by Lambda. If you are a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you that they are offering GPU cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold onto your papers, because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to LambdaLabs.com slash papers or click the link in the video description and sign up for one of their amazing GPU instances today. Our thanks to Lambda for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
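The transcript above notes that the reconstruction is not an image but a 3D cube of color and opacity values, and that the final picture is produced by ray marching through that volume. The snippet below illustrates only that compositing step, with a hand-built density ball standing in for the learned volume; it is a toy sketch, not the model or code from the paper.

```python
import numpy as np

res = 32
coords = np.indices((res, res, res)).transpose(1, 2, 3, 0) / (res - 1)  # points in [0,1]^3
dist = np.linalg.norm(coords - 0.5, axis=-1)
density = np.clip(1.0 - dist / 0.4, 0.0, 1.0)          # a soft ball of "smoke"
color = np.stack([coords[..., 0], coords[..., 1], np.full_like(density, 0.5)], axis=-1)

def march_ray(x, y, n_steps=64):
    """March one ray along +z at pixel (x, y), compositing front to back."""
    transmittance, out = 1.0, np.zeros(3)
    for k in range(n_steps):
        z = int(k / n_steps * (res - 1))                # sample position along the ray
        alpha = density[x, y, z] * 4.0 / n_steps        # opacity contributed by this sample
        out += transmittance * alpha * color[x, y, z]
        transmittance *= 1.0 - alpha
        if transmittance < 1e-3:                        # early ray termination
            break
    return out

image = np.array([[march_ray(x, y) for y in range(res)] for x in range(res)])
print("rendered image shape:", image.shape)             # (32, 32, 3)
```

During training, this same accumulation is what ties the 3D volume to the 2D photos: the marched image is compared against the input view, and the volume is adjusted until they agree.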
[{"start": 0.0, "end": 3.34, "text": " This episode has been supported by Lambda."}, {"start": 3.34, "end": 7.42, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karo Zsolnai-Fehir."}, {"start": 7.42, "end": 11.84, "text": " In this series, we talk about research on all kinds of physics simulations, including"}, {"start": 11.84, "end": 17.52, "text": " fluids, collision physics, and we have even ventured into hair simulations."}, {"start": 17.52, "end": 22.36, "text": " We mostly talk about how the individual hair strands should move and how they should"}, {"start": 22.36, "end": 25.92, "text": " look in terms of color and reflectance."}, {"start": 25.92, "end": 31.160000000000004, "text": " Having these beautiful videos takes getting many, many moving parts right, for instance,"}, {"start": 31.160000000000004, "end": 35.72, "text": " before all of that, the very first step is not any of those steps."}, {"start": 35.72, "end": 41.18, "text": " First, we have to get the 3D geometry of these hair styles into our simulation system."}, {"start": 41.18, "end": 46.88, "text": " In a previous episode, we have seen an excellent work that does this well for human hair."}, {"start": 46.88, "end": 53.0, "text": " But what if we would like to model not human hair, but something completely different?"}, {"start": 53.0, "end": 58.6, "text": " Well, hold on to your papers, because this new work is so general that it can look at"}, {"start": 58.6, "end": 65.84, "text": " an input image or video and give us not only a model of the human hair, but human skin,"}, {"start": 65.84, "end": 69.96000000000001, "text": " garments, and of course, my favorite, smoke plumes and more."}, {"start": 69.96000000000001, "end": 73.96000000000001, "text": " But if you look here, this part begs the following question."}, {"start": 73.96000000000001, "end": 79.48, "text": " The input is an image and the output also looks like an image, and we need to make them"}, {"start": 79.48, "end": 80.48, "text": " similar."}, {"start": 80.48, "end": 82.88, "text": " So, what's the big deal here?"}, {"start": 82.88, "end": 84.96, "text": " A copying machine can do that."}, {"start": 84.96, "end": 85.96, "text": " No?"}, {"start": 85.96, "end": 88.0, "text": " Well, not really."}, {"start": 88.0, "end": 89.0, "text": " Here's why."}, {"start": 89.0, "end": 93.67999999999999, "text": " To create the output, we are working with something that indeed looks like an image,"}, {"start": 93.67999999999999, "end": 95.36, "text": " but it is not an image."}, {"start": 95.36, "end": 102.36, "text": " It is a 3-dimensional cube in which we have to specify color and opacity values everywhere."}, {"start": 102.36, "end": 106.56, "text": " After that, we simulate rays of light passing through this volume, which is a technique"}, {"start": 106.56, "end": 112.08, "text": " that we call ray marching, and this process has to produce the same 2D image through"}, {"start": 112.08, "end": 115.24, "text": " ray marching as what was given as an input."}, {"start": 115.24, "end": 119.44, "text": " That's much, much harder than building a copying machine."}, {"start": 119.44, "end": 124.92, "text": " As you see here, normally, this does not work well at all, because, for instance, a standard"}, {"start": 124.92, "end": 130.2, "text": " algorithm sees lights in the background and assumes that these are really bright and"}, {"start": 130.2, "end": 131.6, "text": " dense points."}, {"start": 131.6, "end": 136.24, "text": " That is kind of true, but 
they are usually not even part of the data that we would like"}, {"start": 136.24, "end": 137.56, "text": " to reconstruct."}, {"start": 137.56, "end": 142.64000000000001, "text": " To solve this issue, the authors propose learning to tell the foreground and background images"}, {"start": 142.64000000000001, "end": 148.6, "text": " apart, so they can be separated before we start the reconstruction of the human."}, {"start": 148.6, "end": 153.88, "text": " And this is a good research paper, which means that if it contains multiple new techniques,"}, {"start": 153.88, "end": 160.0, "text": " each of them are tested separately to know how much they contribute to the final results."}, {"start": 160.0, "end": 165.2, "text": " We get the previously seen dreadful results without the background separation step."}, {"start": 165.2, "end": 167.79999999999998, "text": " Here are the results with the learned backgrounds."}, {"start": 167.79999999999998, "end": 172.88, "text": " We can still see the lights due to the way the final image is constructed, and the fact"}, {"start": 172.88, "end": 176.51999999999998, "text": " that we have so little of this halo effect is really cool."}, {"start": 176.51999999999998, "end": 181.51999999999998, "text": " Here, you see the results with the true background data where the background learning step is"}, {"start": 181.51999999999998, "end": 183.0, "text": " not present."}, {"start": 183.0, "end": 188.76, "text": " Note that this is cheating, because this data is not available for all cameras and backgrounds,"}, {"start": 188.76, "end": 194.04, "text": " however, it is a great way to test the quality of this learning step."}, {"start": 194.04, "end": 199.44, "text": " The comparison of the learned method against this reveals that the two are very close,"}, {"start": 199.44, "end": 201.79999999999998, "text": " which is exactly what we are looking for."}, {"start": 201.79999999999998, "end": 205.04, "text": " And finally, the input footage is also shown for reference."}, {"start": 205.04, "end": 210.16, "text": " This is ultimately what we are trying to achieve, and as you see, the output is quite close"}, {"start": 210.16, "end": 211.16, "text": " to it."}, {"start": 211.16, "end": 217.12, "text": " The final algorithm excels at reconstructing volume data for toys, small plumes, and humans"}, {"start": 217.12, "end": 218.35999999999999, "text": " alike."}, {"start": 218.35999999999999, "end": 223.79999999999998, "text": " And the coolest part is that it works for not only stationary inputs, but for animations"}, {"start": 223.8, "end": 224.8, "text": " as well."}, {"start": 224.8, "end": 230.44, "text": " Wait, actually, there is something that is perhaps even cooler with the magic of neural"}, {"start": 230.44, "end": 234.84, "text": " networks and latent spaces we can even animate this data."}, {"start": 234.84, "end": 240.56, "text": " Here you see an example of that where an avatar is animated in real time by moving around"}, {"start": 240.56, "end": 242.4, "text": " this magenta dot."}, {"start": 242.4, "end": 246.0, "text": " A limiting factor here is the resolution of this reconstruction."}, {"start": 246.0, "end": 250.60000000000002, "text": " If you look closely, you can see that some fine details are missing, but you know the"}, {"start": 250.6, "end": 255.44, "text": " saying, given the rate of progress in machine learning research, two more papers down the"}, {"start": 255.44, "end": 259.12, "text": " line, and this will likely be orders of magnitude 
better."}, {"start": 259.12, "end": 264.04, "text": " And if you feel that you always need to take your daily dose of papers, my statistics show"}, {"start": 264.04, "end": 267.92, "text": " that many of you are subscribed, but didn't use the bell icon."}, {"start": 267.92, "end": 272.44, "text": " If you click this bell icon, you will never miss a future episode and can properly engage"}, {"start": 272.44, "end": 274.2, "text": " in your paper addiction."}, {"start": 274.2, "end": 276.68, "text": " This episode has been supported by Lambda."}, {"start": 276.68, "end": 281.8, "text": " If you are a researcher or a startup looking for cheap GPU compute to run these algorithms,"}, {"start": 281.8, "end": 284.16, "text": " check out Lambda GPU Cloud."}, {"start": 284.16, "end": 288.48, "text": " I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you"}, {"start": 288.48, "end": 291.88, "text": " that they are offering GPU cloud services as well."}, {"start": 291.88, "end": 299.24, "text": " The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19."}, {"start": 299.24, "end": 304.08, "text": " Lambda's web-based IDE lets you easily access your instance right in your browser."}, {"start": 304.08, "end": 309.76, "text": " And finally, hold onto your papers because the Lambda GPU Cloud costs less than half of"}, {"start": 309.76, "end": 311.84, "text": " AWS and Azure."}, {"start": 311.84, "end": 316.68, "text": " Make sure to go to LambdaLabs.com slash papers or click the link in the video description"}, {"start": 316.68, "end": 320.36, "text": " and sign up for one of their amazing GPU instances today."}, {"start": 320.36, "end": 323.47999999999996, "text": " Our thanks to Lambda for helping us make better videos for you."}, {"start": 323.48, "end": 334.64000000000004, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=jtlrWblOyP4
DeepMind’s AlphaStar: A Grandmaster Level StarCraft 2 AI!
❤️ Check out Weights & Biases here and sign up for a free demo: https://www.wandb.com/papers Their mentioned blog post: https://www.wandb.com/articles/ml-best-practices-test-driven-development 📝 The paper "#AlphaStar: Grandmaster level in StarCraft II using multi-agent reinforcement learning" is available here: https://deepmind.com/blog/article/AlphaStar-Grandmaster-level-in-StarCraft-II-using-multi-agent-reinforcement-learning Matches versus Serral, casted by Artosis (this is a playlist): https://www.youtube.com/watch?v=OxseexGkv_Q&list=PLojXIrB9Xau29fR-ZSdbFllI-ZCuH6urt One more incredible match to watch that I loved (note: explicit language): https://www.youtube.com/watch?v=pUEPsHojUdw 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Anastasia Marchenkova, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. The paper we are going to cover today, in my view, is one of the more important things that happened in AI research lately. In the last few years, we have seen DeepMind's AI defeat the best Go players in the world, and after OpenAI's venture in the game of Dota 2, DeepMind embarked on a journey to defeat pro players in Starcraft 2, a real-time strategy game. This is a game that requires a great deal of mechanical skill, split-second decision-making, and we have imperfect information as we only see what our units can see. A nightmare situation for an AI. The previous version of AlphaStar we covered in this series was able to beat at least mid-grandmaster level players, which is truly remarkable, but as with every project of this complexity, there were limitations and caveats. In our earlier video, the paper was still pending, and now it has finally appeared, so my sleepless nights have officially ended, at least for this work, and now we can look into some more results. One of the limitations of the earlier version was that DeepMind needed to further tune some of the parameters and rules to make sure that the AI and the players play on an even footing. For instance, the camera movement and the number of actions the AI can make per minute have been limited some more and are now more human-like. TLO, a professional Starcraft 2 player, noted that this time around, it indeed felt very much like playing another human player. The second limitation was that the AI was only able to play Protoss, which is one of the three races available in the game. This new version can now play all three races, and here you see its MMR ratings, a number that describes the skill level of the AI, and, for nonexperts, win percentages for each individual race. As you see, it is still the best with Protoss; however, all three races are well over the 99% win rate mark. Absolutely amazing. In this version, there is also more emphasis on self-play, and the goal is to create a learning algorithm that is able to learn how to play really well by playing against previous versions of itself millions and millions of times. This is, again, one of those curious cases where the agents train against themselves in a simulated world, and then, when the final AI was deployed on the official game servers, it played against human players for the very first time. I promise to tell you about the results in a moment, but for now, please note that relying more on self-play is extremely difficult. Let me explain why. Self-play agents have a well-known drawback of forgetting, which means that as they improve, they might forget how to win against previous versions of themselves. Since Starcraft 2 is designed in a way that every unit and strategy has an antidote, we have a rock-paper-scissors kind of situation where the agent plays rock all the time because it has encountered a lot of scissors lately. Then, when a lot of papers appear, no pun intended, it will start playing scissors more often and completely forget about the olden times when the rock was all the rage. And on and on this circle goes without any relearning or progress. This doesn't just lead to suboptimal results. This leads to disastrously bad learning, if any learning at all. But it gets even worse. This situation opens up the possibility for an exploiter to take advantage of this information and easily beat these agents. In concrete Starcraft terms, such an exploit could be trying to defeat the AlphaStar A.I.
early by rushing it with workers and warping in photon cannons at their base. This strategy is also known as a cannon rush, and as you can see here with the red agent performing it, it can quickly defeat the unsuspecting blue opponent. So, how do we defend against such exploits? DeepMind used a clever idea here by trying to turn the whole thing around and use these exploits to its advantage. How? Well, they propose a novel self-play method where they additionally insert these exploiter A.I.s to expose the main A.I.'s flaws and create an overall more knowledgeable and robust agent. So, how did it go? Well, as a result, you can see how the green agent has learned to adapt to this by pulling its worker line, and it successfully defended the cannon rush of the red A.I. This is proper machine learning progress happening right before our eyes. Glorious. This is just one example of using exploiters to create a better main A.I., but the training process continually creates newer and newer kinds of exploiters. For instance, you will see in a moment that it later came up with a nasty strategy, including attacking the main base with cloaking units. One of the coolest parts of this work, in my opinion, is that this kind of exploitation is a general concept that will surely come useful for completely different test domains as well. We noted earlier that it finally started playing humans for the first time on the official servers. So, how did that go? In my opinion, given the difficulty and the vast search space we have in Starcraft 2, creating a self-learning A.I. that has the skills of an amateur player is absolutely incredible. But, that's not what happened. Hold onto your papers, because it quickly reached Grandmaster level with all three races and ranked above 99.8% of the officially ranked human players. Bravo, DeepMind. Stunning work. Later, it also played Serral, a decorated world champion Zerg player, one of the most dominant players of our time. I will not spoil the results, especially given that there were limitations, as Serral wasn't playing on his own equipment, but I will note that Artosis, a well-known and beloved Starcraft player and commentator, analyzed these matches and said, quote, the results are so impressive and I really feel like we can learn a lot from it. I would be surprised if a non-human entity could get this good and there was nothing to learn. His commentary was excellent and is tailored towards people who don't know anything about the game. He'll often pause the game and slowly explain what is going on. In these matches, I love the fact that so many times it makes so many plays that we consider to be very poor and somehow, overall, it still plays outrageously well. It has unit compositions that nobody in their right mind would play. It is kind of like a drunken kung fu master, but in Starcraft 2. Love it. But no more spoilers, I think you should really watch these matches, and of course I put a link to his analysis videos in the video description. Even though both this video and the paper appear to be laser-focused on playing Starcraft 2, it is of utmost importance to note that this is still just a testbed to demonstrate the learning capabilities of this AI. As amazing as it sounds, DeepMind wasn't just looking to spend millions and millions of dollars on research just to play video games. The building blocks of AlphaStar are meant to be reasonably general, which means that parts of this AI can be reused for other things.
For instance, Demis Hassabis mentioned weather prediction and climate modeling as examples. If you take only one thought from this video, let it be this one. There is really so much to talk about, so make sure to head over to the video description, watch the matches and check out the paper as well. The evaluation section is as detailed as it can possibly get. What a time to be alive. This episode has been supported by Weights and Biases. Weights and Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford and Berkeley. Here you see a technical case study they published on how a team can work together to build and deploy machine learning models in an organized way. Make sure to visit them through wandb.com slash papers, that is w-a-n-d-b.com slash papers, or just click the link in the video description and you can get the free demo today. Our thanks to Weights and Biases for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
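The forgetting and exploiter ideas from the transcript above can be demonstrated on a much smaller game. In the toy sketch below, rock-paper-scissors stands in for StarCraft's strategy cycles and a simple best-response rule stands in for a trained agent; none of this is AlphaStar's actual league code. Naive self-play chases its own tail forever, while responding to a growing league that also contains exploiters settles onto a robust mixed strategy.

```python
import numpy as np

PAYOFF = np.array([[ 0, -1,  1],    # rock     vs rock/paper/scissors
                   [ 1,  0, -1],    # paper
                   [-1,  1,  0]])   # scissors

def best_response(mix):
    """Pure strategy with the highest expected payoff against a mixed strategy."""
    return int(np.argmax(PAYOFF @ mix))

def one_hot(i):
    v = np.zeros(3)
    v[i] = 1.0
    return v

# Naive self-play: always best-respond to the most recent version of yourself.
current = one_hot(0)                 # start out playing rock
trace = []
for _ in range(9):
    current = one_hot(best_response(current))
    trace.append(int(current.argmax()))
print("naive self-play cycles:", trace)        # 1, 2, 0, 1, 2, 0, ... forever

# League training: an exploiter best-responds to the current main agent, and the
# main agent best-responds to the average of every opponent seen so far.
league = [one_hot(0)]
for _ in range(200):
    main = one_hot(best_response(np.mean(league, axis=0)))
    exploiter = one_hot(best_response(main))   # specifically targets the main agent
    league += [main, exploiter]
print("league average strategy:", np.round(np.mean(league, axis=0), 2))
# The average approaches the robust uniform mix [0.33, 0.33, 0.33] instead of cycling.
```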
[{"start": 0.0, "end": 4.36, "text": " Dear Fellow Scholars, this is Two Minute Papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.36, "end": 9.040000000000001, "text": " The paper we are going to cover today, in my view, is one of the more important things"}, {"start": 9.040000000000001, "end": 11.52, "text": " that happened in AI research lately."}, {"start": 11.52, "end": 16.84, "text": " In the last few years, we have seen DeepMind's AI defeat the best goal players in the world,"}, {"start": 16.84, "end": 22.52, "text": " and after Open AI's venture in the game of Dota 2, DeepMind embarked on a journey to"}, {"start": 22.52, "end": 26.8, "text": " defeat pro players in Starcraft 2, a real-time strategy game."}, {"start": 26.8, "end": 32.2, "text": " This is a game that requires a great deal of mechanical skill, split-second decision-making,"}, {"start": 32.2, "end": 36.96, "text": " and we have imperfect information as we only see what our units can see."}, {"start": 36.96, "end": 39.72, "text": " A nightmare situation for NEAI."}, {"start": 39.72, "end": 45.480000000000004, "text": " The previous version of Alpha Star we covered in this series was able to beat at least mid-grandmaster"}, {"start": 45.480000000000004, "end": 51.44, "text": " level players, which is truly remarkable, but as with every project of this complexity,"}, {"start": 51.44, "end": 53.8, "text": " there were limitations and caveats."}, {"start": 53.8, "end": 60.12, "text": " In our earlier video, the paper was still pending, and now it has finally appeared, so my sleepless"}, {"start": 60.12, "end": 65.36, "text": " nights have officially ended, at least for this work, and now we can look into some more"}, {"start": 65.36, "end": 66.36, "text": " results."}, {"start": 66.36, "end": 71.32, "text": " One of the limitations of the earlier version was that DeepMind needed to further tune"}, {"start": 71.32, "end": 76.72, "text": " some of the parameters and rules to make sure that the AI and the players play on an"}, {"start": 76.72, "end": 78.2, "text": " even footing."}, {"start": 78.2, "end": 83.03999999999999, "text": " For instance, the camera movement and the number of actions the AI can make per minute"}, {"start": 83.04, "end": 86.96000000000001, "text": " has been limited some more and are now more human-like."}, {"start": 86.96000000000001, "end": 92.76, "text": " TLO, a professional starcraft 2 player, noted that this time around, it indeed felt very"}, {"start": 92.76, "end": 95.44000000000001, "text": " much like playing another human player."}, {"start": 95.44000000000001, "end": 100.4, "text": " The second limitation was that the AI was only able to play ProDOS, which is one of the"}, {"start": 100.4, "end": 102.92, "text": " three races available in the game."}, {"start": 102.92, "end": 108.60000000000001, "text": " This new version can now play all three races, and here you see its MMR ratings, a number"}, {"start": 108.6, "end": 114.03999999999999, "text": " that describes the skill level of the AI and for nonexperts, win percentages for each"}, {"start": 114.03999999999999, "end": 115.75999999999999, "text": " individual race."}, {"start": 115.75999999999999, "end": 121.16, "text": " As you see, it is still the best with ProDOS, however, all three races are well over the"}, {"start": 121.16, "end": 123.8, "text": " 99% winrate mark."}, {"start": 123.8, "end": 125.28, "text": " Absolutely amazing."}, {"start": 125.28, "end": 130.76, "text": " In this version, there is also more emphasis on 
self-play, and the goal is to create a learning"}, {"start": 130.76, "end": 136.84, "text": " algorithm that is able to learn how to play really well by playing against previous versions"}, {"start": 136.84, "end": 140.36, "text": " of itself millions and millions of times."}, {"start": 140.36, "end": 145.4, "text": " This is, again, one of those curious cases where the agents train against themselves in"}, {"start": 145.4, "end": 151.4, "text": " a simulated world, and then when the final AI was deployed on the official game servers,"}, {"start": 151.4, "end": 154.88, "text": " it played against human players for the very first time."}, {"start": 154.88, "end": 159.16, "text": " I promise to tell you about the results in a moment, but for now, please note that"}, {"start": 159.16, "end": 163.36, "text": " relying more on self-play is extremely difficult."}, {"start": 163.36, "end": 165.08, "text": " Let me explain why."}, {"start": 165.08, "end": 170.60000000000002, "text": " Play agents have a well-known drawback of forgetting, which means that as they improve, they"}, {"start": 170.60000000000002, "end": 174.88000000000002, "text": " might forget how to win against previous versions of themselves."}, {"start": 174.88000000000002, "end": 180.16000000000003, "text": " Since Starcraft 2 is designed in a way that every unit and strategy has an antidote, we"}, {"start": 180.16000000000003, "end": 185.48000000000002, "text": " have a rock paper scissors kind of situation where the agent plays rock all the time because"}, {"start": 185.48000000000002, "end": 188.16000000000003, "text": " it has encountered a lot of scissors lately."}, {"start": 188.16000000000003, "end": 194.32000000000002, "text": " Then, when a lot of papers appear, no pun intended, it will start playing scissors more often"}, {"start": 194.32, "end": 199.48, "text": " and completely forget about the olden times when the rock was all the rage."}, {"start": 199.48, "end": 204.07999999999998, "text": " And on and on this circle goes without any relearning or progress."}, {"start": 204.07999999999998, "end": 206.56, "text": " This doesn't just lead to suboptimal results."}, {"start": 206.56, "end": 211.07999999999998, "text": " This leads to disastrously bad learning if any learning at all."}, {"start": 211.07999999999998, "end": 213.07999999999998, "text": " But it gets even worse."}, {"start": 213.07999999999998, "end": 218.64, "text": " This situation opens up the possibility for an exploiter to take advantage of this information"}, {"start": 218.64, "end": 220.92, "text": " and easily beat these agents."}, {"start": 220.92, "end": 226.64, "text": " In concrete Starcraft terms, such an exploit could be trying to defeat the Alpha Star A.I."}, {"start": 226.64, "end": 231.44, "text": " early by rushing it with workers and warping in for on cannons to their base."}, {"start": 231.44, "end": 236.56, "text": " This strategy is also known as a cannon rush and as you can see here the red agent performing"}, {"start": 236.56, "end": 240.83999999999997, "text": " this, it can quickly defeat the unsuspecting blue opponent."}, {"start": 240.83999999999997, "end": 244.72, "text": " So, how do we defend against such exploits?"}, {"start": 244.72, "end": 249.83999999999997, "text": " Deep might use a clever idea here by trying to turn the whole thing around and use these"}, {"start": 249.84, "end": 251.92000000000002, "text": " exploits to its advantage."}, {"start": 251.92000000000002, "end": 252.92000000000002, "text": " How?"}, 
{"start": 252.92000000000002, "end": 258.72, "text": " Well, they propose a novel self-play method where they additionally insert these exploitor A.I.s"}, {"start": 258.72, "end": 265.32, "text": " to expose the main A.I.s flaws and create an overall more knowledgeable and robust agent."}, {"start": 265.32, "end": 267.2, "text": " So, how did it go?"}, {"start": 267.2, "end": 272.24, "text": " Well, as a result, you can see how the green agent has learned to adapt to this by pulling"}, {"start": 272.24, "end": 276.68, "text": " its worker line and successfully defended the cannon rush of the red A.I."}, {"start": 276.68, "end": 281.40000000000003, "text": " This is proper machine learning progress happening right before our eyes."}, {"start": 281.40000000000003, "end": 282.76, "text": " Glorious."}, {"start": 282.76, "end": 287.56, "text": " This is just one example of using exploiters to create a better main A.I. but the training"}, {"start": 287.56, "end": 292.08, "text": " process continually creates newer and newer kinds of exploiters."}, {"start": 292.08, "end": 297.64, "text": " For instance, you will see in a moment that it later came up with a nasty strategy, including"}, {"start": 297.64, "end": 300.64, "text": " attacking the main base with cloaking units."}, {"start": 300.64, "end": 305.4, "text": " One of the coolest parts of this work, in my opinion, is that this kind of exploitation"}, {"start": 305.4, "end": 310.2, "text": " is a general concept that will surely come useful for completely different test domains"}, {"start": 310.2, "end": 311.2, "text": " as well."}, {"start": 311.2, "end": 316.12, "text": " We noted earlier that it finally started playing humans for the first time on the official"}, {"start": 316.12, "end": 317.12, "text": " servers."}, {"start": 317.12, "end": 319.0, "text": " So, how did that go?"}, {"start": 319.0, "end": 324.52, "text": " In my opinion, given the difficulty and the vast search space we have in Starcraft 2,"}, {"start": 324.52, "end": 330.28, "text": " creating a self-learning A.I. 
that has the skills of an amateur player is absolutely incredible."}, {"start": 330.28, "end": 332.4, "text": " But, that's not what happened."}, {"start": 332.4, "end": 337.76, "text": " Hold onto your papers because it quickly reached Grandmaster level with all three races and"}, {"start": 337.76, "end": 342.91999999999996, "text": " ranked above 99.8% of the officially ranked human players."}, {"start": 342.91999999999996, "end": 344.56, "text": " Bravo deep-mind."}, {"start": 344.56, "end": 345.56, "text": " Stunning work."}, {"start": 345.56, "end": 351.12, "text": " Later, it also played Cerel a decorated world champion zerg player, one of the most dominant"}, {"start": 351.12, "end": 352.44, "text": " players of our time."}, {"start": 352.44, "end": 357.0, "text": " I will not spoil the results, especially given that there were limitations as Cerel"}, {"start": 357.0, "end": 361.64, "text": " wasn't playing on his own equipment, but I will note that Artosis, a well-known and"}, {"start": 361.64, "end": 367.71999999999997, "text": " beloved Starcraft player and commentator analyzed these matches and said, quote, the results"}, {"start": 367.71999999999997, "end": 371.64, "text": " are so impressive and I really feel like we can learn a lot from it."}, {"start": 371.64, "end": 376.2, "text": " I would be surprised if a non-human entity could get this good and there was nothing to"}, {"start": 376.2, "end": 377.2, "text": " learn."}, {"start": 377.2, "end": 381.8, "text": " His commentary was excellent and is tailored towards people who don't know anything about"}, {"start": 381.8, "end": 382.8, "text": " the game."}, {"start": 382.8, "end": 386.24, "text": " He'll often pause the game and slowly explain what is going on."}, {"start": 386.24, "end": 391.4, "text": " In these matches, I love the fact that so many times it makes so many plays that we"}, {"start": 391.4, "end": 397.79999999999995, "text": " consider to be very poor and somehow, overall, it still plays outrageously well."}, {"start": 397.79999999999995, "end": 401.79999999999995, "text": " It has unit compositions that nobody in their right minds would play."}, {"start": 401.79999999999995, "end": 406.96, "text": " It is kind of like a drunken, kung fu master, but in Starcraft 2."}, {"start": 406.96, "end": 407.96, "text": " Love it."}, {"start": 407.96, "end": 412.52, "text": " But no more spoilers, I think you should really watch these matches and of course I put a"}, {"start": 412.52, "end": 415.56, "text": " link to his analysis videos in the video description."}, {"start": 415.56, "end": 420.96, "text": " Even though both this video and the paper appears to be laser-focused on playing Starcraft"}, {"start": 420.96, "end": 426.44, "text": " 2, it is of utmost importance to note that this is still just a testbed to demonstrate the"}, {"start": 426.44, "end": 428.91999999999996, "text": " learning capabilities of this AI."}, {"start": 428.91999999999996, "end": 433.88, "text": " As amazing as it sounds, DeepMind wasn't just looking to spend millions and millions of"}, {"start": 433.88, "end": 437.03999999999996, "text": " dollars on research just to play video games."}, {"start": 437.03999999999996, "end": 442.12, "text": " The building blocks of Alpha Star are meant to be reasonably general, which means that parts"}, {"start": 442.12, "end": 445.32, "text": " of this AI can be reused for other things."}, {"start": 445.32, "end": 451.08, "text": " For instance, Demisasab is mentioned, weather prediction and climate 
modeling as examples."}, {"start": 451.08, "end": 454.68, "text": " If you take only one thought from this video, let it be this one."}, {"start": 454.68, "end": 459.8, "text": " There is really so much to talk about, so make sure to head over to the video description,"}, {"start": 459.8, "end": 462.68, "text": " watch the matches and check out the paper as well."}, {"start": 462.68, "end": 467.03999999999996, "text": " The evaluation section is as detailed as it can possibly get."}, {"start": 467.03999999999996, "end": 468.76, "text": " What a time to be alive."}, {"start": 468.76, "end": 472.0, "text": " This episode has been supported by weights and biases."}, {"start": 472.0, "end": 476.84, "text": " weights and biases provides tools to track your experiments in your deep learning projects."}, {"start": 476.84, "end": 482.72, "text": " It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota"}, {"start": 482.72, "end": 485.4, "text": " Research, Stanford and Berkeley."}, {"start": 485.4, "end": 491.0, "text": " Here you see a technical case study they published on how a team can work together to build"}, {"start": 491.0, "end": 494.4, "text": " and deploy machine learning models in an organized way."}, {"start": 494.4, "end": 501.16, "text": " Make sure to visit them through WendeeB.com slash papers, w-a-n-d-b.com slash papers, or"}, {"start": 501.16, "end": 505.40000000000003, "text": " just click the link in the video description and you can get the free demo today."}, {"start": 505.40000000000003, "end": 509.04, "text": " Our thanks to weights and biases for helping us make better videos for you."}, {"start": 509.04, "end": 539.0, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=4J0cpdR7qec
This AI Makes The Mona Lisa Speak…And More!
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Few-shot Video-to-Video Synthesis" is available here: https://nvlabs.github.io/few-shot-vid2vid/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Anastasia Marchenkova, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Matthias Jost, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #DeepFake
This episode has been supported by Lambda. Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In an earlier episode, we covered a paper by the name Everybody Dance Now. In this stunning work, we could take a video of a professional dancer, then record a video of our own, let's be diplomatic, less beautiful moves, and then transfer the dancer's performance onto our own body in the video. We call this process motion transfer. Now, look at this new, also learning-based technique that does something similar, where in goes a description of a pose, just one image of a target person, and on the other side, out comes the proper animation of this character, according to our prescribed motions. Now, before you think that this means that we would need to draw and animate stick figures to use it, I will stress that this is not the case. There are many techniques that perform pose estimation, where we just insert a photo, or even a video, and they create all these stick figures for us that represent the pose that people are taking in these videos. This means that we can even have a video of someone dancing, and just one image of the target person, and the rest is history. Insanity. That is already amazing and very convenient, but this paper works with a video-to-video problem formulation, which is a concept that is more general than just generating movement. Way more. For instance, we can also specify an input video of us, then add one, or at most, a few images of the target subject, and we can make them speak and behave using our gestures. This is already absolutely amazing. However, the more creative minds out there are already thinking that if we are talking about images, it can be a painting as well, right? Yes, indeed, we can make the Mona Lisa speak with it as well. It can also take a labeled image. This is what you see here, where the colored and animated patches show the object boundaries for different object classes. Then, we take an input photo of a street scene, and we get photorealistic footage with all the cars, buildings, and vegetation. Now, make no mistake, some of these applications were possible before, many of which we showcased in previous videos, some of which you can see here. What is new and interesting here is that we have just one architecture that can handle many of these tasks. Beyond that, this architecture requires much less data than previous techniques, as it often needs just one, or at most, a few images of the target subject to do all this magic. The paper contains ample comparisons to these other methods. For instance, the FID measures the quality and the diversity of the generated output images and is subject to minimization, and you see that it is miles beyond these previous works. Some limitations also apply: if the inputs stray too far away from topics that the neural networks were trained on, we shouldn't expect results of this quality, and we are also dependent on proper inputs for the poses and segmentation maps for it to work well. The pace of progress in machine learning research is absolutely incredible, and we are getting very close to producing tools that can be actively used to empower artists working in the industry. What a time to be alive! If you are a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos, and I'm happy to tell you that they are offering GPU cloud services as well.
The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold onto your papers because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to lambdalabs.com slash papers and sign up for one of their amazing GPU instances today. Our thanks to Lambda for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
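The transcript above leans on FID, the Fréchet Inception Distance, as its headline metric: it compares the mean and covariance of Inception features extracted from real and generated images, and lower values are better. As a rough illustration of how that number comes about, here is a minimal sketch, assuming the Inception features have already been extracted into two NumPy arrays; the function and variable names are illustrative and not taken from the paper's code.

```python
import numpy as np
from scipy import linalg


def frechet_inception_distance(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    """FID between two sets of feature vectors of shape (N, D).

    Compares the mean and covariance of the two feature distributions
    under a Gaussian assumption. Lower is better.
    """
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    sigma_r = np.cov(feats_real, rowvar=False)
    sigma_g = np.cov(feats_gen, rowvar=False)

    diff = mu_r - mu_g
    # Matrix square root of the product of the two covariance matrices.
    covmean, _ = linalg.sqrtm(sigma_r @ sigma_g, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop tiny imaginary parts from numerical error

    return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))


if __name__ == "__main__":
    # Illustrative usage with random "features" standing in for Inception activations.
    rng = np.random.default_rng(0)
    real = rng.normal(size=(1000, 64))
    fake = rng.normal(loc=0.1, size=(1000, 64))
    print(frechet_inception_distance(real, fake))
```

In practice, published FID scores also fix details such as the feature extractor checkpoint and the number of samples, so scores are only comparable when those settings match.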
[{"start": 0.0, "end": 3.04, "text": " This episode has been supported by Lambda."}, {"start": 3.04, "end": 6.96, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zonai-Fahir."}, {"start": 6.96, "end": 11.76, "text": " In an earlier episode, we covered a paper by the name Everybody Dance Now."}, {"start": 11.76, "end": 15.76, "text": " In this stunning work, we could take a video of a professional dancer,"}, {"start": 15.76, "end": 21.44, "text": " then record a video of our own, let's be diplomatic, less beautiful moves,"}, {"start": 21.44, "end": 26.080000000000002, "text": " and then transfer the dancer's performance onto our own body in the video."}, {"start": 26.080000000000002, "end": 28.8, "text": " We call this process motion transfer."}, {"start": 28.8, "end": 34.32, "text": " Now, look at this new, also learning based technique that does something similar,"}, {"start": 34.32, "end": 39.36, "text": " where Ingo's a description of a pose, just one image of a target person,"}, {"start": 39.36, "end": 44.0, "text": " and on the other side, outcomes the proper animation of this character,"}, {"start": 44.0, "end": 46.24, "text": " according to our prescribed motions."}, {"start": 46.24, "end": 52.08, "text": " Now, before you think that it means that we would need to draw and animate stick figures to use this,"}, {"start": 52.08, "end": 54.400000000000006, "text": " I will stress that this is not the case."}, {"start": 54.4, "end": 59.6, "text": " There are many techniques that perform pose estimation, where we just insert a photo,"}, {"start": 59.6, "end": 63.519999999999996, "text": " or even a video, and it creates all these stick figures for us,"}, {"start": 63.519999999999996, "end": 67.03999999999999, "text": " that represent the pose that people are taking in these videos."}, {"start": 67.75999999999999, "end": 71.28, "text": " This means that we can even have a video of someone dancing,"}, {"start": 71.28, "end": 75.36, "text": " and just one image of the target person, and the rest is history."}, {"start": 76.0, "end": 77.03999999999999, "text": " Insanity."}, {"start": 77.03999999999999, "end": 80.16, "text": " That is already amazing and very convenient,"}, {"start": 80.16, "end": 84.16, "text": " but this paper works with a video to video problem formulation,"}, {"start": 84.16, "end": 88.0, "text": " which is a concept that is more general than just generating movement."}, {"start": 88.72, "end": 89.6, "text": " Way more."}, {"start": 89.6, "end": 93.2, "text": " For instance, we can also specify the input video of us,"}, {"start": 93.2, "end": 97.2, "text": " than add one, or at least a few images of the target subject,"}, {"start": 97.2, "end": 101.28, "text": " and we can make them speak and behave using our gestures."}, {"start": 101.92, "end": 104.32, "text": " This is already absolutely amazing."}, {"start": 104.32, "end": 109.84, "text": " However, the more creative minds out there are already thinking that if we are thinking about"}, {"start": 109.84, "end": 118.08, "text": " images, it can be a painting as well, right? 
Yes, indeed, we can make the Mona Lisa speak with it as well."}, {"start": 122.08, "end": 124.08, "text": " It can also take a labeled image."}, {"start": 124.08, "end": 129.68, "text": " This is what you see here, where the colored and animated patches show the object boundaries"}, {"start": 129.68, "end": 131.04, "text": " for different object classes."}, {"start": 131.84, "end": 135.04, "text": " Then, we take an input photo of a street scene,"}, {"start": 135.04, "end": 140.23999999999998, "text": " and we get photorealistic footage with all the cars, buildings, and vegetation."}, {"start": 141.04, "end": 145.6, "text": " Now, make no mistake, some of these applications were possible before,"}, {"start": 145.6, "end": 148.56, "text": " many of which we showcased in previous videos,"}, {"start": 148.56, "end": 152.48, "text": " some of which you can see here, what is new and interesting here,"}, {"start": 152.48, "end": 156.56, "text": " is that we have just one architecture that can handle many of these tasks."}, {"start": 157.12, "end": 161.76, "text": " Beyond that, this architecture requires much less data than previous techniques,"}, {"start": 161.76, "end": 168.23999999999998, "text": " as it often needs just one, or at most, a few images of the target subject to do all this magic."}, {"start": 168.95999999999998, "end": 172.32, "text": " The paper is ample in comparison to these other methods."}, {"start": 172.32, "end": 178.16, "text": " For instance, the FID measures the quality and the diversity of degenerated output images,"}, {"start": 178.16, "end": 183.2, "text": " and is subject to minimization, and you see that it is miles beyond these previous works."}, {"start": 183.84, "end": 189.12, "text": " Some limitations also apply if the inputs stray too far away from topics that the neuron"}, {"start": 189.12, "end": 192.96, "text": " networks were trained on, we shouldn't expect results of this quality,"}, {"start": 192.96, "end": 198.4, "text": " and we are also dependent on proper inputs for the poses and segmentation maps for it to work well."}, {"start": 198.88, "end": 203.28, "text": " The pace of progress in machine learning research is absolutely incredible,"}, {"start": 203.28, "end": 208.88, "text": " and we are getting very close to producing tools that can be actively used to empower artists"}, {"start": 208.88, "end": 212.24, "text": " working in the industry. What a time to be alive!"}, {"start": 212.24, "end": 217.52, "text": " If you are a researcher or a startup looking for cheap GPU compute to run these algorithms,"}, {"start": 217.52, "end": 223.28, "text": " check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos,"}, {"start": 223.28, "end": 227.76000000000002, "text": " and I'm happy to tell you that they are offering GPU cloud services as well."}, {"start": 227.76000000000002, "end": 234.8, "text": " The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19."}, {"start": 234.8, "end": 240.16000000000003, "text": " Lambda's web-based IDE lets you easily access your instance right in your browser."}, {"start": 240.16000000000003, "end": 246.96, "text": " And finally, hold onto your papers because the Lambda GPU Cloud costs less than half of AWS"}, {"start": 246.96, "end": 252.96, "text": " and Azure. Make sure to go to lambdalabs.com, slash papers, and sign up for one of their amazing"}, {"start": 252.96, "end": 258.0, "text": " GPU instances today. 
Our thanks to Lambda for helping us make better videos for you."}, {"start": 258.0, "end": 285.44, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=slJI5r9rltI
This AI Captures Your Hair Geometry...From Just One Photo! 👩‍🦱
❤️ Check out Linode here and get $20 free credit on your account: https://www.linode.com/papers 📝 Links to the paper "Dynamic Hair Modeling from Monocular Videos using Deep Neural Networks" are available here: http://www.cad.zju.edu.cn/home/zyy/docs/dynamic_hair.pdf http://www.kunzhou.net/2019/dynamic-hair-capture-sa19.pdf https://www.youyizheng.net/research.html ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Anastasia Marchenkova, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Matthias Jost, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #GameDev
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this series, we talk about research on all kinds of physics simulations, including fluids, collision physics, and we have even ventured into hair simulations. If you look here at this beautiful footage, you may be surprised to know how many moving parts a researcher has to get right to achieve something like this. For instance, some of these simulations have to go down to the level of computing the physics between individual hair strands. If it is done well, like what you see here from our earlier episode, these simulations will properly show us how things should move, but that's not all. There is also an abundance of research works out there on how they should look. And even then, we are not done, because before that, we have to take a step back and somehow create these digital 3D models that show us the geometry of these flamboyant hairstyles. Approximately 300 episodes ago, we talked about a technique that took a photograph as an input and created a digital 3D model that we can use in our simulations and rendering systems. It had a really cool idea where it initially predicted a coarse result, and then this result was matched against the hairstyles found in public data repositories and the closest match was presented to us. Clearly, this often meant that we got something that was similar to the photograph, but not exactly the hairstyle we were seeking. And now, hold on to your papers, because this work introduces a learning-based framework that can create a full reconstruction by itself without external help, and now squeeze that paper, because it works not only for images, but for videos too. It works for shorter hairstyles, long hair, and it even takes into consideration motion and external forces as well. The heart of the architecture behind this technique is a pair of neural networks, where the one above creates the predicted hair geometry for each frame, while the other looks backwards in the data and tries to predict the appropriate motions that should be present. Interestingly, it only needs two consecutive frames to make these predictions, and adding more information does not seem to improve its results. That is very little data. Quite remarkable. Also, note that there are a lot of moving parts here in the full paper. For instance, this motion is first predicted in 2D and is then extrapolated to 3D afterwards. Let's have a look at this comparison. Indeed, it seems to produce smoother and more appealing results than this older technique. But if we look here, this other method seems even better, so what about that? Well, this method had access to multiple views of the model, which is significantly more information than what this new technique has, which only needs a simple, monocular 2D video from our phone or from the internet. The fact that they are even comparable is absolutely amazing. If you have a look at the paper, you will see that it even contains a hair-growing component in this architecture. And as you see, the progress in computer graphics research is absolutely amazing. And we are even being paid for this. Unreal. This episode has been supported by Linode. Linode is the world's largest independent cloud computing provider. They offer affordable GPU instances featuring the Quadro RTX 6000, which is tailor-made for AI, scientific computing, and computer graphics projects. Exactly the kind of works you see here in this series.
If you feel inspired by these works and you wish to run your experiments or deploy your already existing works through a simple and reliable hosting service, make sure to join over 800,000 other happy customers and choose Linode. To spin up your own GPU instance and receive a $20 free credit, visit linode.com slash papers or click the link in the description and use the promo code Papers20 during signup. Give it a try today. Thanks to Linode for supporting the series and helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
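To make the two-network idea from the transcript above a bit more concrete, here is a heavily simplified, hypothetical PyTorch-style sketch: one module maps a single frame to a hair-geometry code, while a second module consumes only two consecutive frames to produce a motion code, mirroring the "very little data" observation. All class names, layer choices, and tensor shapes are assumptions for illustration; the actual paper predicts strand-level geometry and lifts 2D motion to 3D, which this sketch does not attempt.

```python
import torch
import torch.nn as nn


class HairGeometryNet(nn.Module):
    """Maps one RGB frame to a coarse hair-geometry code (illustrative only)."""

    def __init__(self, geometry_dim: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, geometry_dim),
        )

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        # frame: (B, 3, H, W) -> geometry code: (B, geometry_dim)
        return self.encoder(frame)


class HairMotionNet(nn.Module):
    """Predicts a motion code from just two consecutive frames."""

    def __init__(self, motion_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(
            # Two RGB frames stacked along the channel axis -> 6 input channels.
            nn.Conv2d(6, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, motion_dim),
        )

    def forward(self, prev_frame: torch.Tensor, cur_frame: torch.Tensor) -> torch.Tensor:
        return self.encoder(torch.cat([prev_frame, cur_frame], dim=1))


# Usage sketch: per-frame geometry plus a motion code from two consecutive frames.
geometry_net, motion_net = HairGeometryNet(), HairMotionNet()
prev_f, cur_f = torch.randn(1, 3, 128, 128), torch.randn(1, 3, 128, 128)
geometry_code = geometry_net(cur_f)
motion_code = motion_net(prev_f, cur_f)
```

Stacking the two frames along the channel dimension is just one simple way to let a convolutional encoder see temporal change; the real system is considerably more involved.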
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.32, "end": 9.040000000000001, "text": " In this series, we talk about research on all kinds of physics simulations, including"}, {"start": 9.040000000000001, "end": 14.48, "text": " fluids, collision physics, and we have even ventured into hair simulations."}, {"start": 14.48, "end": 19.080000000000002, "text": " If you look here at this beautiful footage, you may be surprised to know how many moving"}, {"start": 19.080000000000002, "end": 22.96, "text": " parts a researcher has to get right to get something like this."}, {"start": 22.96, "end": 27.92, "text": " For instance, some of these simulations have to go down to the level of computing the physics"}, {"start": 27.92, "end": 30.32, "text": " between individual hair strands."}, {"start": 30.32, "end": 34.92, "text": " If it is done well, like what you see here from our earlier episode, these simulations"}, {"start": 34.92, "end": 39.08, "text": " will properly show us how things should move, but that's not all."}, {"start": 39.08, "end": 43.800000000000004, "text": " There is also an abundance of research works out there on how they should look."}, {"start": 43.800000000000004, "end": 49.480000000000004, "text": " And even then, we are not done because before that, we have to take a step back and somehow"}, {"start": 49.480000000000004, "end": 55.760000000000005, "text": " create these digital 3D models that show us the geometry of these flamboyant hairstyles."}, {"start": 55.76, "end": 61.36, "text": " Approximately 300 episodes ago, we talked about a technique that took a photograph as"}, {"start": 61.36, "end": 67.24, "text": " an input and created a digital 3D model that we can use in our simulations and rendering"}, {"start": 67.24, "end": 68.32, "text": " systems."}, {"start": 68.32, "end": 74.2, "text": " It had a really cool idea where it initially predicted a course result, and then this"}, {"start": 74.2, "end": 80.2, "text": " result was matched with the hair stars found in public data repositories and the closest"}, {"start": 80.2, "end": 82.44, "text": " match was presented to us."}, {"start": 82.44, "end": 88.03999999999999, "text": " Clearly, this often meant that we get something that was similar to the photograph, but often"}, {"start": 88.03999999999999, "end": 91.16, "text": " not exactly the hairstyle we were seeking."}, {"start": 91.16, "end": 96.44, "text": " And now, hold on to your papers because this work introduces a learning-based framework"}, {"start": 96.44, "end": 103.03999999999999, "text": " that can create a full reconstruction by itself without external help, and now squeeze that"}, {"start": 103.03999999999999, "end": 108.08, "text": " paper because it works not only for images, but for videos too."}, {"start": 108.08, "end": 114.36, "text": " It works for shorter hairstyles, long hair, and even takes into consideration motion and"}, {"start": 114.36, "end": 116.52, "text": " external forces as well."}, {"start": 116.52, "end": 120.92, "text": " The heart of the architecture behind this technique is this pair of neural networks where"}, {"start": 120.92, "end": 126.88, "text": " the one above creates the predicted hair geometry for each frame, while the other tries to look"}, {"start": 126.88, "end": 132.56, "text": " backwards in the data and try to predict the appropriate motions that should be present."}, {"start": 132.56, "end": 137.28, "text": " Interestingly, it only 
needs two consecutive frames to make these predictions and adding"}, {"start": 137.28, "end": 140.88, "text": " more information does not seem to improve its results."}, {"start": 140.88, "end": 143.08, "text": " That is very little data."}, {"start": 143.08, "end": 144.16, "text": " Quite remarkable."}, {"start": 144.16, "end": 148.4, "text": " Also, note that there are a lot of moving parts here in the full paper."}, {"start": 148.4, "end": 155.0, "text": " For instance, this motion is first predicted in 2D and is then extrapolated to 3D afterwards."}, {"start": 155.0, "end": 157.24, "text": " Let's have a look at this comparison."}, {"start": 157.24, "end": 163.44, "text": " Indeed, it seems to produce smoother and more appealing results than this older technique."}, {"start": 163.44, "end": 168.8, "text": " But if we look here, this other method seems even better, so what about that?"}, {"start": 168.8, "end": 173.68, "text": " Well, this method had access to multiple views of the model, which is significantly more"}, {"start": 173.68, "end": 179.4, "text": " information than what this new technique has that only needs a simple, monocular 2D video"}, {"start": 179.4, "end": 182.44, "text": " from our phone or from the internet."}, {"start": 182.44, "end": 186.76, "text": " The fact that they are even comparable is absolutely amazing."}, {"start": 186.76, "end": 191.64, "text": " If you have a look at the paper, you will see that it even contains a hair growing component"}, {"start": 191.64, "end": 193.12, "text": " in this architecture."}, {"start": 193.12, "end": 198.20000000000002, "text": " And as you see, the progress in computer graphics research is absolutely amazing."}, {"start": 198.20000000000002, "end": 200.32, "text": " And we are even being paid for this."}, {"start": 200.32, "end": 201.6, "text": " Unreal."}, {"start": 201.6, "end": 203.84, "text": " This episode has been supported by Linode."}, {"start": 203.84, "end": 207.72, "text": " Linode is the world's largest independent cloud computing provider."}, {"start": 207.72, "end": 214.04000000000002, "text": " They offer affordable GPU instances featuring the Quadro RTX 6000, which is tailor-made"}, {"start": 214.04000000000002, "end": 218.52, "text": " for AI, scientific computing and computer graphics projects."}, {"start": 218.52, "end": 221.8, "text": " Exactly the kind of works you see here in this series."}, {"start": 221.8, "end": 226.68, "text": " If you feel inspired by these works and you wish to run your experiments or deploy your"}, {"start": 226.68, "end": 231.64000000000001, "text": " already existing works through a simple and reliable hosting service, make sure to join"}, {"start": 231.64000000000001, "end": 236.0, "text": " over 800,000 other happy customers and choose Linode."}, {"start": 236.0, "end": 242.04000000000002, "text": " To spin up your own GPU instance and receive a $20 free credit, visit linode.com slash"}, {"start": 242.04000000000002, "end": 248.20000000000002, "text": " papers or click the link in the description and use the promo code Papers20 during Sina."}, {"start": 248.20000000000002, "end": 249.52, "text": " Give it a try today."}, {"start": 249.52, "end": 254.32000000000002, "text": " Thanks to Linode for supporting the series and helping us make better videos for you."}, {"start": 254.32, "end": 281.96, "text": " Thanks for watching and for your generous support and I'll see you next time."}]