CHANNEL_NAME: stringclasses (1 value)
URL: stringlengths (43-43)
TITLE: stringlengths (19-90)
DESCRIPTION: stringlengths (475-4.65k)
TRANSCRIPTION: stringlengths (0-20.1k)
SEGMENTS: stringlengths (2-30.8k)
Two Minute Papers
https://www.youtube.com/watch?v=zDTUbtmUbG8
This AI Makes Celebrities Old…For a Price! 👵
❤️ Check out Fully Connected by Weights & Biases: https://wandb.me/papers 📝 The paper "Only a Matter of Style: Age Transformation Using a Style-based Regression Model" is available here: https://yuval-alaluf.github.io/SAM/ Demo: https://replicate.ai/yuval-alaluf/sam ▶️Our Twitter: https://twitter.com/twominutepapers 📝 Our material synthesis paper with the latent space: https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to take a bunch of celebrities and imagine what they look like as tiny little babies. And then we will also make them old. And at the end of this video, I'll also step up to the plate and become a baby myself. So, what is this black magic here? Well, what you see here is a bunch of synthetic humans created by a learning-based technique called StyleGAN 3, which appeared this year in June 2021. It is a neural network-based learning algorithm that is capable of synthesizing these eye-poppingly detailed images of human beings that don't exist, and even animating them. Now, how does it do all this black magic? Well, it takes walks in a latent space. What is that? A latent space is a made-up place where we are trying to organize data in a way that similar things are close to each other. In our earlier work, we were looking to generate hundreds of variants of a material model to populate this scene. In this latent space, we can concoct all of these really cool digital material models. A link to this work is available in the video description. StyleGAN uses walks in a similar latent space to create these human faces and animate them. And now, hold on to your papers, because a latent space can represent not only materials or the head movement and smiles for people, but even better, age too. You remember these amazing transformations from the intro? So, how does it do that? Well, similarly to the material example, we can embed the source image into a latent space and take a path therein. It looks like this. Please remember this embedding step because we are going to refer to it in a moment. And now comes the twist. The latent space for this new method is built such that when we take these walks, it disentangles age from other attributes. This means that only the age changes and nothing else changes. This is very challenging to pull off because normally, when we change our location in the latent space, not just one thing changes, everything changes. This was the case with the materials. But not with this method, which can take photos of well-known celebrities and make them look younger or older. I kind of want to do this to myself too. So, you know what? Now, it's my turn. This is what Baby Károly might look like after reading baby papers. And this is Old Man Károly complaining that papers were way better back in his day. And this is supposedly Baby Károly from a talk at a NATO conference. Look, apparently they let anybody in these days. Now, this is all well and good, but there is a price to be paid for this. So, what is the price? Let's find out together what that is. Here is the reference image of me. And here is how the transformations came out. Did you find the issue? Well, the issue is that I don't really look like this. Not only because the beard was synthesized onto my face by an earlier AI, but really, I can't find my exact image in this. Take another look. This is what the input image looked like. Can you find it in the output somewhere? Not really. Same with the conference image. This is the actual original image of me. And this is the output of the AI. So, I can't find myself. Why is that? Now, you remember I mentioned earlier that we embed the source image into the latent space. And this step is, unfortunately, imperfect. We start out from not exactly the same image, but only something similar to it. This is the price to be paid for these amazing results. 
And with that, please remember to invoke the First Law of Papers, which says: do not look at where we are, look at where we will be, two more papers down the line. Now, even better news: as of the writing of this episode, you can try it yourself. Now, be warned that our club of Fellow Scholars is growing rapidly, and you all are always so curious that we usually go over and crash these websites upon the publishing of these episodes. If that happens, please be patient. Otherwise, if you tried it, please let me know in the comments how it went, or just tweet at me. I'd love to see some more baby scholars. What a time to be alive! This video has been supported by Weights & Biases. Check out their recent offering, Fully Connected, a place where they bring machine learning practitioners together to share and discuss their ideas, learn from industry leaders, and even collaborate on projects together. You see, I get messages from you Fellow Scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. Fully Connected is a great way to learn about the fundamentals, how to reproduce experiments, get your papers accepted to a conference, and more. Make sure to visit them through wandb.me/papers or just click the link in the video description. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Ejona Ifehir."}, {"start": 4.64, "end": 12.16, "text": " Today, we are going to take a bunch of celebrities and imagine what they look like as tiny little babies."}, {"start": 12.16, "end": 14.72, "text": " And then we will also make them old."}, {"start": 14.72, "end": 20.96, "text": " And at the end of this video, I'll also step up to the plate and become a baby myself."}, {"start": 20.96, "end": 23.44, "text": " So, what is this black magic here?"}, {"start": 23.44, "end": 27.04, "text": " Well, what you see here is a bunch of synthetic humans"}, {"start": 27.04, "end": 34.72, "text": " created by a learning-based technique called Stuylegan 3, which appeared this year in June 2021."}, {"start": 34.72, "end": 40.8, "text": " It is a neural network-based learning algorithm that is capable of synthesizing these eye-poppingly"}, {"start": 40.8, "end": 47.04, "text": " detailed images of human beings that don't exist and even animate them."}, {"start": 47.04, "end": 50.08, "text": " Now, how does it do all this black magic?"}, {"start": 50.08, "end": 53.36, "text": " Well, it takes walks in a latent space."}, {"start": 53.36, "end": 55.519999999999996, "text": " What is that?"}, {"start": 55.519999999999996, "end": 64.08, "text": " A latent space is a made-up place where we are trying to organize data in a way that similar things are close to each other."}, {"start": 64.08, "end": 71.12, "text": " In our earlier work, we were looking to generate hundreds of variants of a material model to populate this scene."}, {"start": 71.12, "end": 76.72, "text": " In this latent space, we can concoct all of these really cool digital material models."}, {"start": 76.72, "end": 79.92, "text": " A link to this work is available in the video description."}, {"start": 79.92, "end": 86.56, "text": " Stuylegan uses walks in a similar latent space to create these human faces and animate them."}, {"start": 86.56, "end": 94.72, "text": " And now, hold on to your papers because a latent space can represent not only materials or the head movement"}, {"start": 94.72, "end": 99.52000000000001, "text": " and smiles for people, but even better, age 2."}, {"start": 99.52000000000001, "end": 102.88, "text": " You remember these amazing transformations from the intro?"}, {"start": 102.88, "end": 105.2, "text": " So, how does it do that?"}, {"start": 105.2, "end": 114.0, "text": " Well, similarly, to the material example, we can embed the source image into a latent space and take a path therein."}, {"start": 114.0, "end": 115.76, "text": " It looks like this."}, {"start": 115.76, "end": 120.8, "text": " Please remember this embedding step because we are going to refer to it in a moment."}, {"start": 120.8, "end": 122.80000000000001, "text": " And now comes the twist."}, {"start": 122.80000000000001, "end": 131.04, "text": " The latent space for this new method is built such that when we take these walks, it disentangles age from other attributes."}, {"start": 131.04, "end": 136.07999999999998, "text": " This means that only the age changes and nothing else changes."}, {"start": 136.07999999999998, "end": 142.72, "text": " This is very challenging to pull off because normally when we change our location in the latent space,"}, {"start": 142.72, "end": 146.16, "text": " not just one thing changes, everything changes."}, {"start": 146.16, "end": 148.48, "text": " This was the case with the materials."}, {"start": 148.48, 
"end": 156.32, "text": " But not with this method which can take photos of well-known celebrities and make them look younger or older."}, {"start": 156.32, "end": 159.2, "text": " I kind of want to do this to myself too."}, {"start": 159.2, "end": 160.95999999999998, "text": " So, you know what?"}, {"start": 160.95999999999998, "end": 162.64, "text": " Now, it's my turn."}, {"start": 162.64, "end": 167.76, "text": " This is what BB Karoy might look like after reading BB papers."}, {"start": 167.76, "end": 174.16, "text": " And this is Old Man Karoy complaining that papers were way better back in his day."}, {"start": 174.16, "end": 180.23999999999998, "text": " And this is supposedly BB Karoy from a talk at a NATO conference."}, {"start": 180.23999999999998, "end": 183.67999999999998, "text": " Look, apparently they let anybody in these days."}, {"start": 183.67999999999998, "end": 188.56, "text": " Now, this is all well and good, but there is a price to be paid for this."}, {"start": 188.56, "end": 190.56, "text": " So, what is the price?"}, {"start": 190.56, "end": 193.12, "text": " Let's find out together what that is."}, {"start": 193.12, "end": 195.52, "text": " Here is the reference image of me."}, {"start": 195.52, "end": 198.8, "text": " And here is how the transformations came out."}, {"start": 198.8, "end": 200.64000000000001, "text": " Did you find the issue?"}, {"start": 200.64000000000001, "end": 204.08, "text": " Well, the issue is that I don't really look like this."}, {"start": 204.08, "end": 209.52, "text": " Not only because the beard was synthesized onto my face by an earlier AI,"}, {"start": 209.52, "end": 213.52, "text": " but really, I can't really find my exact image in this."}, {"start": 213.52, "end": 214.96, "text": " Take another look."}, {"start": 214.96, "end": 217.52, "text": " This is what the input image looked like."}, {"start": 217.52, "end": 220.48000000000002, "text": " Can you find it in the output somewhere?"}, {"start": 220.48000000000002, "end": 221.68, "text": " Not really."}, {"start": 221.68, "end": 223.52, "text": " Same with the conference image."}, {"start": 223.52, "end": 226.48000000000002, "text": " This is the actual original image of me."}, {"start": 226.48000000000002, "end": 229.28, "text": " And this is the output of the AI."}, {"start": 229.28, "end": 231.84, "text": " So, I can't find myself."}, {"start": 231.84, "end": 233.36, "text": " Why is that?"}, {"start": 233.36, "end": 240.16000000000003, "text": " Now, you remember I mentioned earlier that we embed the source image into the latent space."}, {"start": 240.16000000000003, "end": 243.68, "text": " And this step is, unfortunately, imperfect."}, {"start": 243.68, "end": 249.68, "text": " We start out from not exactly the same image, but only something similar to it."}, {"start": 249.68, "end": 253.04000000000002, "text": " This is the price to be paid for these amazing results."}, {"start": 253.04000000000002, "end": 257.52, "text": " And with that, please remember to invoke the first law of papers."}, {"start": 257.52, "end": 260.16, "text": " Which says, do not look at where we are."}, {"start": 260.16, "end": 261.76, "text": " Look at where we will be."}, {"start": 261.76, "end": 263.76, "text": " Two more papers down the line."}, {"start": 263.76, "end": 267.68, "text": " Now, even better news, as of the writing of this episode,"}, {"start": 267.68, "end": 269.6, "text": " you can try it yourself."}, {"start": 269.6, "end": 274.0, "text": " Now, be warned that our club of 
fellow scholars is growing rapidly,"}, {"start": 274.0, "end": 277.68, "text": " and you all are always so curious that we usually go over"}, {"start": 277.68, "end": 281.76000000000005, "text": " and crash these websites upon the publishing of these episodes."}, {"start": 281.76000000000005, "end": 284.16, "text": " If that happens, please be patient."}, {"start": 284.16, "end": 288.0, "text": " Otherwise, if you tried it, please let me know in the comments how it went"}, {"start": 288.0, "end": 289.52000000000004, "text": " or just tweet at me."}, {"start": 289.52000000000004, "end": 291.92, "text": " I'd love to see some more baby scholars."}, {"start": 291.92, "end": 293.36, "text": " What a time to be alive!"}, {"start": 294.08000000000004, "end": 297.44, "text": " This video has been supported by weights and biases."}, {"start": 297.44, "end": 300.56, "text": " Check out the recent offering, fully connected,"}, {"start": 300.56, "end": 304.08, "text": " a place where they bring machine learning practitioners together"}, {"start": 304.08, "end": 306.8, "text": " to share and discuss their ideas,"}, {"start": 306.8, "end": 308.8, "text": " learn from industry leaders,"}, {"start": 308.8, "end": 312.0, "text": " and even collaborate on projects together."}, {"start": 312.0, "end": 315.44, "text": " You see, I get messages from your fellow scholars telling me"}, {"start": 315.44, "end": 318.15999999999997, "text": " that you have been inspired by the series,"}, {"start": 318.15999999999997, "end": 320.64, "text": " but don't really know where to start."}, {"start": 321.2, "end": 322.8, "text": " And here it is."}, {"start": 322.8, "end": 326.4, "text": " Fully connected is a great way to learn about the fundamentals,"}, {"start": 326.4, "end": 328.56, "text": " how to reproduce experiments,"}, {"start": 328.56, "end": 331.44, "text": " get your papers accepted to a conference,"}, {"start": 331.44, "end": 332.4, "text": " and more."}, {"start": 332.4, "end": 337.03999999999996, "text": " Make sure to visit them through wnb.me slash papers"}, {"start": 337.03999999999996, "end": 339.76, "text": " or just click the link in the video description."}, {"start": 339.76, "end": 343.35999999999996, "text": " Our thanks to weights and biases for their longstanding support"}, {"start": 343.35999999999996, "end": 346.23999999999995, "text": " and for helping us make better videos for you."}, {"start": 346.23999999999995, "end": 348.47999999999996, "text": " Thanks for watching and for your generous support,"}, {"start": 348.48, "end": 359.6, "text": " and I'll see you next time."}]
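A minimal toy sketch of the latent-space "age walk" described in the transcript above: embed an image into a latent space, then move along a single direction that is assumed to control age and nothing else. This is not the SAM model or StyleGAN itself; `encode`, `generate`, and `age_direction` are hypothetical stand-ins so the arithmetic of the edit is visible and the script runs on its own.

```python
# Toy illustration of the disentangled latent-space age edit described above.
# The real method uses a trained encoder and a StyleGAN generator; here the
# encoder, generator, and age direction are random stand-ins (assumptions),
# so only the structure of the edit is shown.
import numpy as np

LATENT_DIM = 512
rng = np.random.default_rng(0)

def encode(image: np.ndarray) -> np.ndarray:
    # Stand-in for the (imperfect) embedding step mentioned in the video.
    return rng.standard_normal(LATENT_DIM)

def generate(latent: np.ndarray) -> np.ndarray:
    # Stand-in for the generator mapping a latent code back to an "image".
    return np.tanh(latent[:3])  # three numbers pretend to be pixels

# One direction in latent space assumed to change only age (disentanglement).
age_direction = rng.standard_normal(LATENT_DIM)
age_direction /= np.linalg.norm(age_direction)

photo = np.zeros((256, 256, 3))   # placeholder input photo
w = encode(photo)                 # embed the source image into latent space

for strength in (-3.0, 0.0, 3.0): # younger ... original ... older
    w_edited = w + strength * age_direction
    print(f"age strength {strength:+.1f} -> output sample {generate(w_edited)}")
```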
Two Minute Papers
https://www.youtube.com/watch?v=8YOpFsZsR9w
Simulating A Virtual World…For A Thousand Years! 🤯
❤️ Check out Perceptilabs and sign up for a free demo here: https://www.perceptilabs.com/papers 📝 The paper "Synthetic Silviculture: Multi-scale Modeling of Plant Ecosystems" is available here: https://storage.googleapis.com/pirk.io/projects/synthetic_silviculture/index.html 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to simulate thousands of years of vegetation in a virtual world. And I am telling you, this paper is unbelievable. Now, normally, if we are building a virtual world, we don't really think about simulating a physics- and biology-based ecosystem. Let's be honest, it's more like, yeah, just throw in some trees, and we are done here. But, in reality, the kinds of vegetation we have in a virtual world should be at the very least a function of precipitation and temperature. And here, at this point, we know that this paper means business. You see, if there is no rain and it's super cold, we get a tundra. With no rain and high temperature, we get a desert. And if we keep the temperature high and add a ton of precipitation, we get a tropical rainforest. And this technique promises to be able to simulate these and everything in between. See these beautiful little worlds here? These are not illustrations. No, no, these are already the result of the simulation program. Nice. Now, let's run a tiny simulation with 400 years and a few hundred plants. Step number one, the first few decades are dominated by these shrubs blocking away the sunlight from everyone else. But, over time, step two, watch these resilient pine trees slowly overtake them and deny them their precious sunlight. And what happens as a result? Look, their downfall brings forth a complete ecosystem change. And then, step three, spruce trees start to appear. This changes the game. Why? Well, these are more shade tolerant, and let's see. Yes, they take over the ecosystem from the pine trees. A beautifully done story in such a little simulation. I absolutely love it. Of course, any self-respecting virtual world will also contain other objects, not just the vegetation, and the simulator says no problem. Just chuck them in there, and the vegetation will react accordingly and grow around them. Now, let's see a mid-size simulation. Look at that. Imagine the previous story with the pine trees, but with not a few hundred, but a hundred thousand plants. This can simulate that too. And now comes the final boss, a large simulation. Half a million plants and more than a thousand years. Yes, really. Let's see the story here in images first, and then you will see the full simulation. Yes, over the first hundred years, fast-growing shrubs dominate and start growing everywhere. And after a few hundred more years, the slower-growing trees catch up and start overshadowing the shrubs at lower elevation levels. And then more kinds of trees appear at lower elevations, and slowly, over 1400 years, a beautiful mixed-age forest emerges. I shiver just thinking about the fact that through the power of computer graphics research, we can simulate all this on a computer. What a time to be alive. And while we look at the full simulation, please note that there is more happening here; make sure to have a look at the paper in the video description if you wish to know more details. Now about the paper. And here comes an additional interesting part. As of the making of this video, this paper has been cited 20 times. Yes, but this paper is from 2019, so it had years to soak up some citations, which didn't really happen. Now note that, one, citations are not everything, and two, 20 citations for a paper in the field of computer graphics is not bad at all. 
But every time I see an amazing paper, I really wish that more people would hear about it, and always I find that almost nobody knows about them. And once again, this is why I started to make Two Minute Papers. Thank you so much for coming on this amazing journey with me. PerceptiLabs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. And it even generates visualizations for all the model variables, and gives you recommendations both during modeling and training, and it does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com/papers and start using their system for free today. Thanks to PerceptiLabs for their support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.5600000000000005, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.5600000000000005, "end": 10.32, "text": " Today we are going to simulate thousands of years of vegetation in a virtual world."}, {"start": 10.32, "end": 14.4, "text": " And I am telling you, this paper is unbelievable."}, {"start": 15.040000000000001, "end": 20.56, "text": " Now, normally, if we are building a virtual world, we don't really think about simulating"}, {"start": 20.56, "end": 27.2, "text": " a physics and biology-based ecosystem. Let's be honest, it's more like, yeah, just throw"}, {"start": 27.2, "end": 33.6, "text": " in some trees, and we are done here. But, in reality, the kinds of vegetation we have in a virtual"}, {"start": 33.6, "end": 41.2, "text": " world should be at the very least a function of precipitation and temperature. And here, at this"}, {"start": 41.2, "end": 48.08, "text": " point, we know that this paper means business. You see, if there is no rain and it's super cold,"}, {"start": 48.08, "end": 54.96, "text": " we get a tundra. With no rain and high temperature, we get a desert. And if we keep the temperature"}, {"start": 54.96, "end": 61.6, "text": " high and add a ton of precipitation, we get a tropical rainforest. And this technique promises"}, {"start": 61.6, "end": 67.76, "text": " to be able to simulate these and everything in between. See these beautiful little worlds here?"}, {"start": 68.48, "end": 74.8, "text": " These are not illustrations. No, no, these are already the result of the simulation program."}, {"start": 75.52, "end": 82.08, "text": " Nice. Now, let's run a tiny simulation with 400 years and a few hundred plants."}, {"start": 82.08, "end": 88.48, "text": " Step number one, the first few decades are dominated by these shrubs blocking away the sunlight"}, {"start": 88.48, "end": 96.88, "text": " from everyone else. But, over time, step two, watch these resilient pine trees slowly overtake them"}, {"start": 96.88, "end": 103.6, "text": " and deny them their precious sunlight. And what happens as a result? Look, they're downfall,"}, {"start": 103.6, "end": 110.88, "text": " brings forth a complete ecosystem change. And then, step three, screw trees start to appear."}, {"start": 110.88, "end": 118.32, "text": " This changes the game. Why? Well, these are more shade tolerant and let's see."}, {"start": 118.32, "end": 125.67999999999999, "text": " Yes, they take over the ecosystem from the pine trees. A beautifully done story in such a little"}, {"start": 125.67999999999999, "end": 132.96, "text": " simulation. I absolutely love it. Of course, any suffering-specting virtual world will also contain"}, {"start": 132.96, "end": 139.51999999999998, "text": " other objects, not just the vegetation, and the simulator says no problem. Just chuck them in"}, {"start": 139.52, "end": 147.12, "text": " there and we'll just react accordingly and grow around it. Now, let's see a mid-size simulation."}, {"start": 147.92000000000002, "end": 154.4, "text": " Look at that. Imagine the previous story with the pine trees, but with not a few hundred,"}, {"start": 154.4, "end": 162.0, "text": " but a hundred thousand plants. This can simulate that too. And now comes the final boss,"}, {"start": 162.0, "end": 169.92, "text": " a large simulation. Half a million plants and more than a thousand years. Yes, really. 
Let's see"}, {"start": 169.92, "end": 176.72, "text": " the story here in images first and then you will see the full simulation. Yes, over the first"}, {"start": 176.72, "end": 183.04, "text": " hundred years, fast growing shrubs dominate and start growing everywhere. And after a few hundred"}, {"start": 183.04, "end": 189.6, "text": " more years, the slower growing trees catch up and start overshadowing the shrubs at lower elevation"}, {"start": 189.6, "end": 198.4, "text": " levels. And then more kinds of trees appear at lower elevations and slowly over 1400 years,"}, {"start": 198.4, "end": 206.07999999999998, "text": " a beautiful mixed-age forest emerges. I shiver just thinking about the fact that through the power"}, {"start": 206.07999999999998, "end": 212.79999999999998, "text": " of computer graphics research works, we can simulate all this on a computer. What a time to be alive."}, {"start": 212.79999999999998, "end": 217.68, "text": " And while we look at the full simulation, please note that there is more happening here,"}, {"start": 217.68, "end": 222.8, "text": " make sure to have a look at the paper in the video description if you wish to know more details."}, {"start": 222.8, "end": 229.12, "text": " Now about the paper. And here comes an additional interesting part. As of the making of this video,"}, {"start": 229.12, "end": 237.20000000000002, "text": " this paper has been referred to 20 times. Yes, but this paper is from 2019, so it had years to soak"}, {"start": 237.20000000000002, "end": 244.24, "text": " up some citations, which didn't really happen. Now note that one citations are not everything,"}, {"start": 244.24, "end": 250.4, "text": " and two, 20 citations for a paper in the field of computer graphics is not bad at all. But"}, {"start": 250.88, "end": 256.32, "text": " every time I see an amazing paper, I really wish that more people would hear about it,"}, {"start": 256.32, "end": 263.52, "text": " and always I find that almost nobody knows about them. And once again, this is why I started to make"}, {"start": 263.52, "end": 269.68, "text": " two-minute papers. Thank you so much for coming on this amazing journey with me. Perceptilebs"}, {"start": 269.68, "end": 276.88, "text": " is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible."}, {"start": 276.88, "end": 282.56, "text": " This gives you a faster way to build out models with more transparency into how your model is"}, {"start": 282.56, "end": 290.0, "text": " architected, how it performs, and how to debug it. And it even generates visualizations for all"}, {"start": 290.0, "end": 296.72, "text": " the model variables, and gives you recommendations both during modeling and training, and thus all"}, {"start": 296.72, "end": 303.12, "text": " this automatically. I only wish I had a tool like this when I was working on my neural networks"}, {"start": 303.12, "end": 310.48, "text": " during my PhD years. Visit perceptilebs.com, slash papers, and start using their system for free"}, {"start": 310.48, "end": 316.56, "text": " today. Thanks to perceptilebs for their support, and for helping us make better videos for you."}, {"start": 316.56, "end": 329.92, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
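The transcript above claims that the biome of a virtual world should be, at minimum, a function of precipitation and temperature (tundra, desert, tropical rainforest, and everything in between). Here is a tiny sketch of that idea only; the threshold values are illustrative assumptions, not the paper's actual climate or ecosystem model.

```python
# Sketch of "biome as a function of precipitation and temperature",
# matching only the three cases named in the video. Thresholds are
# assumed values for illustration, not from the paper.
def biome(annual_precip_mm: float, mean_temp_c: float) -> str:
    dry = annual_precip_mm < 250
    if dry and mean_temp_c < 0:
        return "tundra"
    if dry and mean_temp_c >= 20:
        return "desert"
    if annual_precip_mm > 2000 and mean_temp_c >= 20:
        return "tropical rainforest"
    return "temperate / mixed forest"  # everything in between

for precip, temp in [(100, -10), (100, 30), (3000, 26), (800, 12)]:
    print(f"{precip} mm, {temp} C -> {biome(precip, temp)}")
```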
Two Minute Papers
https://www.youtube.com/watch?v=3wHbeq61Wn0
Man VS Machine: Who Plays Table Tennis Better? 🤖
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Optimal Stroke Learning with Policy Gradient Approach for Robotic Table Tennis" is available here: https://arxiv.org/abs/2109.03100 https://www.youtube.com/watch?v=SNnqtGLmX4Y ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to see if a robot can learn to play table tennis. Spoiler alert, the answer is yes, quite well in fact. That is surprising, but what is even more surprising is how quickly it learned to do that. Recently we have seen a growing number of techniques where robots learn in a computer simulation and then get deployed into the real world. Yes, that sounds like science fiction. So, does this work in practice? Well, I'll give you two recent examples and you can decide for yourself. What you see here is example number one, where OpenAI's robot hand learned to dexterously rotate this Rubik's Cube to a given target state. How did it do it? Yes, you guessed it right, it learned in a simulation. However, no simulation is as detailed as the real world, so they used a technique called automatic domain randomization, in which they create a large number of random environments, each of which is a little different. And the AI is meant to learn how to solve many different variants of the same problem. And the result? Did it learn general knowledge from that? Yes, and what's more, this became not only a dexterous robot hand that can execute these rotations, but we can make up creative ways to torment this little machine and it still stood its ground. Okay, so this works, but is this concept good enough for commercial applications? You bet. Example number two: Tesla uses no less than a simulated game world to train their self-driving cars. For instance, when we are in this synthetic video game, it is suddenly much easier to teach the algorithm safely. You can also make any scenario easier or harder, replace a car with a dog or a pack of dogs, and make many similar examples so that the AI can learn from these what-if situations as much as possible. Now, all that's great, but today we are going to see whether this concept can be generalized to playing table tennis. And I have to be honest, I am very enthused, but a little skeptical too. This task requires finesse, rapid movement, and predicting what is about to happen in the near future. It really is the whole package, isn't it? Now, let's enter the training simulation and see how it goes. First, we hit the ball over to its side, specify a desired return position, and ask it to practice returning the ball around this desired position. Then, after a quick retraining step against the ball-throwing machine, we observe the first amazing thing. You see, the first cool thing here is that it practices against sidespin and topspin balls. What are those? These are techniques where the players hit the ball in ways that make its trajectory much more difficult to predict. Okay, enough of this. Now, hold on to your papers and let's see how the final version of the AI fares against a player. And, whoa! It really made the transition into the real world. Look at that. This seems like it could go on forever. Let's watch for a few seconds. Yep, still going. Still going. But, we are not done yet, not even close. We said at the start of the video that this training is quick. How quick? Well, if you have been holding on to your papers, now squeeze that paper, because all the robot took was one and a half hours of training. And wait, there are two more mind-blowing numbers here. It can return 98% of the balls, and most of them are within 25 centimeters, or about 10 inches, of the desired spot. And, again, great news. 
This is also one of those techniques that does not require Google or OpenAI-level resources to make something really amazing. And, you know, this is the way to make an excellent excuse to play table tennis during work hours. They really made it work. Huge congratulations to the team. Now, of course, not even this technique is perfect. We noted that it can handle sidespin and topspin balls, but it can't deal with backspin balls yet, because, get this, quoting: it causes too much acceleration in a robot joint. Yes, a robot with joint pain. What a time to be alive. Now, one more thing: as of the making of this video, this was seen by a grand total of 54 people. Again, there is a real possibility that if we don't talk about this amazing work, no one will. And this is why I started Two Minute Papers. Thank you very much for coming on this journey with me. Please subscribe if you wish to see more of these. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com/papers to sign up for one of their amazing GPU instances today. Thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karojejona Ifeher."}, {"start": 5.0, "end": 10.48, "text": " Today we are going to see if a robot can learn to play table tennis."}, {"start": 10.48, "end": 14.8, "text": " Spoiler alert, the answer is yes, quite well in fact."}, {"start": 14.8, "end": 22.0, "text": " That is surprising, but what is even more surprising is how quickly it learned to do that."}, {"start": 22.0, "end": 28.5, "text": " Recently we have seen a growing number of techniques where robots learn in a computer simulation"}, {"start": 28.5, "end": 32.5, "text": " and then get deployed into the real world."}, {"start": 32.5, "end": 35.5, "text": " Yes, that sounds like science fiction."}, {"start": 35.5, "end": 38.1, "text": " So, does this work in practice?"}, {"start": 38.1, "end": 43.3, "text": " Well, I'll give you two recent examples and you can decide for yourself."}, {"start": 43.3, "end": 53.0, "text": " What you see here is example number one where open AI's robot hand learned to dexterously rotate this ruby cube to a given target state."}, {"start": 53.0, "end": 54.5, "text": " How did it do it?"}, {"start": 54.5, "end": 58.2, "text": " Yes, you guessed it right, it learned in a simulation."}, {"start": 58.2, "end": 73.2, "text": " However, no simulation is as detailed as the real world, so they used a technique called automatic domain randomization in which they create a large number of random environments, each of which are a little different."}, {"start": 73.2, "end": 80.2, "text": " And the AI is meant to learn how to solve many different variants of the same problem."}, {"start": 80.2, "end": 84.2, "text": " And the result? Did it learn general knowledge from that?"}, {"start": 84.2, "end": 99.2, "text": " Yes, what's more, this became not only a dexterous robot hand that can execute these rotations, but we can make up creative ways to torment this little machine and it still stood its ground."}, {"start": 99.2, "end": 105.7, "text": " Okay, so this works, but is this concept good enough for commercial applications?"}, {"start": 105.7, "end": 114.7, "text": " You bet. 
Example number two, Tesla uses no less than a simulated game world to train their self-driving cars."}, {"start": 114.7, "end": 122.7, "text": " For instance, when we are in this synthetic video game, it is suddenly much easier to teach the algorithm safely."}, {"start": 122.7, "end": 138.7, "text": " You can also make any scenario easier, harder, replace a car with a dog or a pack of dogs, and make many similar examples so that the AI can learn from these what if situations as much as possible."}, {"start": 138.7, "end": 147.7, "text": " Now, all that's great, but today we are going to see whether this concept can be generalized to playing table tennis."}, {"start": 147.7, "end": 153.7, "text": " And I have to be honest, I am very enthused, but a little skeptical too."}, {"start": 153.7, "end": 161.7, "text": " This task requires finesse, rapid movement, and predicting what is about to happen in the near future."}, {"start": 161.7, "end": 163.7, "text": " It really is the whole package, isn't it?"}, {"start": 163.7, "end": 168.7, "text": " Now, let's enter the training simulation and see how it goes."}, {"start": 168.7, "end": 179.7, "text": " First, we hit the ball over to its side, specify a desired return position, and ask it to practice returning the ball around this desired position."}, {"start": 179.7, "end": 186.7, "text": " Then, after a quick retraining step against the ball throwing machine, we observe the first amazing thing."}, {"start": 186.7, "end": 193.7, "text": " You see, the first cool thing here is that it practices against side spin and top spin balls."}, {"start": 193.7, "end": 201.7, "text": " What are those? These are techniques where the players hit the ball in ways to make their trajectory much more difficult to predict."}, {"start": 201.7, "end": 211.7, "text": " Okay, enough of this. Now, hold on to your papers and let's see how the final version of the AI fares against a player."}, {"start": 211.7, "end": 217.7, "text": " And, whoa! It really made the transition into the real world."}, {"start": 217.7, "end": 224.7, "text": " Look at that. This seems like it could go on forever. Let's watch for a few seconds."}, {"start": 224.7, "end": 229.7, "text": " Yep, still going. Still going."}, {"start": 229.7, "end": 236.7, "text": " But, we are not done yet, not even close. We said at the start of the video that this training is quick."}, {"start": 236.7, "end": 246.7, "text": " How quick? Well, if you have been holding on to your papers, now squeeze that paper because all the robot took was one and a half hours of training."}, {"start": 246.7, "end": 261.7, "text": " And wait, there are two more mind-blowing numbers here. It can return 98% of the balls and most of them are within 25 centimeters or about 10 inches of the desired spot."}, {"start": 261.7, "end": 273.7, "text": " And, again, great news. This is also one of those techniques that does not require Google or OpenAI-level resources to make something really amazing."}, {"start": 273.7, "end": 283.7, "text": " And, you know, this is the way to make an excellent excuse to play table tennis during work hours. They really made it work."}, {"start": 283.7, "end": 298.7, "text": " Huge congratulations to the team. Now, of course, not even this technique is perfect. We noted that it can handle side spin and top spin balls, but it can deal with backspin balls yet because get this."}, {"start": 298.7, "end": 316.7, "text": " Quoting, it causes too much acceleration in a robot joint. Yes, a robot with joint pain. 
What a time to be alive. Now, one more thing, as of the making of this video, this was seen by a grand total of 54 people."}, {"start": 316.7, "end": 331.7, "text": " Again, there is a real possibility that if we don't talk about this amazing work, no one will. And this is why I started two-minute papers. Thank you very much for coming on this journey with me. Please subscribe if you wish to see more of these."}, {"start": 331.7, "end": 355.7, "text": " This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure."}, {"start": 355.7, "end": 369.7, "text": " Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers."}, {"start": 369.7, "end": 376.7, "text": " Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today."}, {"start": 376.7, "end": 386.7, "text": " Thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time."}]
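The transcript above describes domain randomization: every training episode runs in a simulator whose physical parameters are slightly different, so the learned policy has to cope with many variants of the same task before it ever touches the real world. The sketch below illustrates only that sampling loop; it is not the table-tennis paper's code or OpenAI's ADR implementation, and the parameter names, ranges, and the `train_episode` stub are assumptions for illustration.

```python
# Illustrative domain-randomization loop: sample new environment physics
# for each episode, then train/evaluate the policy in that variant.
import random
from dataclasses import dataclass

@dataclass
class EnvParams:
    ball_mass_kg: float     # real table tennis balls weigh about 2.7 g
    table_friction: float
    latency_ms: float       # sensing/actuation delay of the robot

def sample_env() -> EnvParams:
    # Each episode gets a slightly different world (assumed ranges).
    return EnvParams(
        ball_mass_kg=random.uniform(0.0025, 0.0029),
        table_friction=random.uniform(0.2, 0.4),
        latency_ms=random.uniform(5.0, 25.0),
    )

def train_episode(params: EnvParams) -> float:
    # Stand-in for one policy rollout in the randomized simulator;
    # returns a fake "return rate" so the loop runs end to end.
    return random.random()

random.seed(0)
for episode in range(5):
    params = sample_env()
    print(f"episode {episode}: {params} -> return rate {train_episode(params):.2f}")
```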
Two Minute Papers
https://www.youtube.com/watch?v=qeSoAbJoi7c
Can A Virtual Sponge Sink? 🧽
❤️ Check out Weights & Biases and say hi in their community forum here: https://wandb.me/paperforum 📝 The paper "Unified particle system for multiple-fluid flow and porous material" is available here: https://cronfa.swansea.ac.uk/Record/cronfa57521 https://dl.acm.org/doi/10.1145/3450626.3459764 Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to not only simulate fluids, but even better, we are going to absorb them with sponge-like porous materials too. This paper does not have the usual super-high-resolution results that you see with many other simulation papers. However, I really wanted to show you what it can do through four lovely experiments. And at the end of the video, you will see how slowly or quickly it runs. Experiment number one, spongy dumbbells. One side is made of spongy material, and as it starts absorbing the water, let's see if it sinks. Well, it slowly starts to descend as it gets heavier, and then it gets so heavy that eventually it sinks the other half too. That is a good start. Now, experiment number two, absorption. Here, the green liquid is hard to absorb, therefore we expect it to pass through, while the red liquid is easier to absorb and should get stuck in this perforated material. Let's see if that is indeed what happens. The green is coming through, and where is all the red fluid? Well, most of it is getting absorbed here. But wait, the paper promises to ensure that mass and momentum still get transferred through the interactions properly. So, if the fluid volumes are simulated correctly, we expect a lot more green down there than red. Indeed we get it. Oh yes, checkmark. And if you think this was a great absorption simulation, wait until you see this. Experiment number three, absorption on steroids. This is a liquid mixture, and these are porous materials, each of which will absorb exactly one component of the liquid and let through the rest. Let's see, the first obstacle absorbs the blue component quickly, the second retains all the green component, and the red flows through. And look at how gracefully the red fluid feels the last punch. Lovely. Onwards to experiment number four, artistic control. Since we are building our own little virtual world, we make all the rules. So, let's see the first absorption case. Nothing too crazy here. But if this is not in line with our artistic vision, do not despair, because this is our world, so we can play with these physical parameters. For instance, let's increase the absorption rate so the fluid enters the solid faster. Or we can increase the permeability. This is the ease of passage of the fluid through the material. Does it transfer quicker into the sponge? Yes, it does. And finally, we can speed up both how quickly the fluid enters the solid and how quickly it travels within, and we have also increased the amount of absorption. Let's see if we end up with a smaller remaining volume. Indeed we do. The beauty of building these virtual worlds is that we can have these simulations under our artistic control. So, how long do we have to wait for the sponges? Well, hold on to your papers, because we not only don't have to sit down and watch an episode of SpongeBob to get these done, but get this. The crazy three-sponge experiment used about a quarter of a million particles for the fluid and another quarter-ish million particles for the solids, and runs at approximately a second per frame. Yes, it runs interactively, and all this can be done on a consumer graphics card. And yes, that's why the resolution of the simulations is not that high. The authors could have posted a much higher resolution simulation and kept the execution time in the minutes-per-frame domain, but no. They wanted to show us what this can do interactively. 
And in this case, it is important that you apply the First Law of Papers, which says that research is a process. Do not look at where we are, look at where we will be, two more papers down the line. So, there we go. I really wanted to show you this paper because unfortunately, if we don't talk about it, almost no one will see it. And this is why Two Minute Papers exists. And if you wish to discuss this paper, make sure to drop by on our Discord server. The link is available in the video description. This video has been supported by Weights & Biases. Look at this. They have a great community forum that aims to make you the best machine learning engineer you can be. You see, I always get messages from you Fellow Scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. In this forum, you can share your projects, ask for advice, look for collaborators, and more. Make sure to visit wandb.me/paperforum and say hi, or just click the link in the video description. Thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karojola Ifehir."}, {"start": 5.0, "end": 15.0, "text": " Today, we are going to not only simulate fluids, but even better, we are going to absorb them with sponge-like porous materials too."}, {"start": 15.0, "end": 22.0, "text": " This paper does not have the usual super-high resolution results that you see with many other simulation papers."}, {"start": 22.0, "end": 28.0, "text": " However, I really wanted to show you what it can do through four lovely experiments."}, {"start": 28.0, "end": 34.0, "text": " And at the end of the video, you will see how slowly or quickly it runs."}, {"start": 34.0, "end": 38.0, "text": " Experiment number one, spongy dumbbells."}, {"start": 38.0, "end": 46.0, "text": " One side is made of spongy material and as it starts absorbing the water, let's see if it sinks."}, {"start": 46.0, "end": 56.0, "text": " Well, it slowly starts to descend as it gets heavier and then it gets so heavy that eventually it sinks the other half too."}, {"start": 56.0, "end": 58.0, "text": " That is a good start."}, {"start": 58.0, "end": 62.0, "text": " Now, experiment number two, absorption."}, {"start": 62.0, "end": 74.0, "text": " Here, the green liquid is hard to absorb, therefore we expect it to pass through while the red liquid is easier to absorb and should get stuck in this perforated material."}, {"start": 74.0, "end": 77.0, "text": " Let's see if that is indeed what happens."}, {"start": 77.0, "end": 81.0, "text": " The green is coming through and where is all the red glue?"}, {"start": 81.0, "end": 85.0, "text": " Well, most of it is getting absorbed here."}, {"start": 85.0, "end": 94.0, "text": " But wait, the paper promises to ensure that mass and momentum still gets transferred through the interactions properly."}, {"start": 94.0, "end": 101.0, "text": " So, if the fluid volumes are simulated correctly, we expect a lot more green down there than red."}, {"start": 101.0, "end": 103.0, "text": " Indeed we get it."}, {"start": 103.0, "end": 105.0, "text": " Oh yes, checkmark."}, {"start": 105.0, "end": 116.0, "text": " And if you think this was a great absorption simulation, wait until you see this. Experiment number three, absorption on steroids."}, {"start": 116.0, "end": 127.0, "text": " This is a liquid mixture and these are porous materials, each of which will absorb exactly one component of the liquid and let through the rest."}, {"start": 127.0, "end": 137.0, "text": " Let's see, the first obstacle absorbs the blue component quickly, the second retains all the green component and the red flows through."}, {"start": 137.0, "end": 144.0, "text": " And look at how gracefully the red fluid feels the last punch."}, {"start": 144.0, "end": 145.0, "text": " Lovely."}, {"start": 145.0, "end": 150.0, "text": " Onwards to experiment number four, artistic control."}, {"start": 150.0, "end": 158.0, "text": " Since we are building our own little virtual world, we make all the rules. So, let's see the first absorption case."}, {"start": 158.0, "end": 160.0, "text": " Nothing too crazy here."}, {"start": 160.0, "end": 170.0, "text": " But if this is not in line with our artistic vision, do not despair because this is our world, so we can play with these physical parameters."}, {"start": 170.0, "end": 178.0, "text": " For instance, let's increase the absorption rate so the fluid enters the solid faster."}, {"start": 178.0, "end": 186.0, "text": " Or we can increase the permeability. 
This is the ease of passage of the fluid through the material."}, {"start": 186.0, "end": 189.0, "text": " Does it transfer quicker into the sponge?"}, {"start": 189.0, "end": 191.0, "text": " Yes, it does."}, {"start": 191.0, "end": 203.0, "text": " And finally, we can speed up both how quickly the fluid enters the solid, how quickly it travels within, and we have also increased the amount of absorption."}, {"start": 203.0, "end": 208.0, "text": " Let's see if we end up with a smaller remaining volume. Indeed we do."}, {"start": 208.0, "end": 215.0, "text": " The beauty of building these virtual worlds is that we can have these simulations under our artistic control."}, {"start": 215.0, "end": 218.0, "text": " So, how long do we have to wait for the sponges?"}, {"start": 218.0, "end": 228.0, "text": " Well, hold on to your papers, because we not only don't have to sit down and watch an episode of SpongeBob to get these done, but get this."}, {"start": 228.0, "end": 242.0, "text": " The crazy three sponge experiment used about a quarter of a million particles for the fluid and another quarter-ish million particles for the solids and runs at approximately a second per frame."}, {"start": 242.0, "end": 249.0, "text": " Yes, it runs interactively and all this can be done on a consumer graphics card."}, {"start": 249.0, "end": 264.0, "text": " And yes, that's why the resolution of the simulations is not that high. The authors could have posted a much higher resolution simulation and kept the execution time in the minutes per frame domain, but no."}, {"start": 264.0, "end": 268.0, "text": " They wanted to show us what this can do interactively."}, {"start": 268.0, "end": 281.0, "text": " And in this case, it is important that you apply the first law of papers which says that research is a process. Do not look at where we are, look at where we will be, two more papers down the line."}, {"start": 281.0, "end": 283.0, "text": " So, there we go."}, {"start": 283.0, "end": 290.0, "text": " I really wanted to show you this paper because unfortunately, if we don't talk about it, almost no one will see it."}, {"start": 290.0, "end": 301.0, "text": " And this is why two minute papers exist. And if you wish to discuss this paper, make sure to drop by on our Discord server. The link is available in the video description."}, {"start": 301.0, "end": 312.0, "text": " This video has been supported by weights and biases. Look at this. They have a great community forum that aims to make you the best machine learning engineer you can be."}, {"start": 312.0, "end": 322.0, "text": " You see, I always get messages from you fellow scholars telling me that you have been inspired by the series, but don't really know where to start."}, {"start": 322.0, "end": 330.0, "text": " And here it is. In this forum, you can share your projects, ask for advice, look for collaborators and more."}, {"start": 330.0, "end": 346.0, "text": " Make sure to visit wmb.me slash paper forum and say hi or just click the link in the video description. Thanks to weights and biases for their long standing support and for helping us make better videos for you."}, {"start": 346.0, "end": 361.0, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
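The transcript above names two "artistic control" knobs for the porous-material simulation: the absorption rate (how fast fluid enters the solid) and the total amount of absorption. Below is a deliberately simplified stand-in for that idea only; it is not the paper's unified particle model, it skips permeability (transport inside the solid), and the update rule and values are assumptions for illustration.

```python
# Toy absorption update: a knob for how fast free fluid enters the sponge
# (absorption_rate) and a cap on how much it can hold (capacity).
def step_absorption(free_fluid: float, absorbed: float,
                    absorption_rate: float, capacity: float,
                    dt: float = 1.0) -> tuple[float, float]:
    room_left = max(capacity - absorbed, 0.0)
    transfer = min(absorption_rate * dt * free_fluid, room_left)
    return free_fluid - transfer, absorbed + transfer

free, inside = 1.0, 0.0
for step in range(5):
    free, inside = step_absorption(free, inside,
                                   absorption_rate=0.3,  # raise: fluid enters faster
                                   capacity=0.6)         # raise: more gets absorbed
    print(f"step {step}: free={free:.3f} absorbed={inside:.3f}")
```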
Two Minute Papers
https://www.youtube.com/watch?v=ReBeJcmIlnA
Virtual Reality Fluid Drawing Is Here! 🥛
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers 📝 The paper "Interactive Liquid Splash Modeling by User Sketches" is available here: https://web.cse.ohio-state.edu/~wang.3602/Yan-2020-ILS/Yan-2020-ILS.pdf https://web.cse.ohio-state.edu/~wang.3602/publications.html 📝My earlier work on fluid control is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/real_time_fluid_control_eg/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #vr
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to control the fate of liquids in virtual worlds. This footage is from one of my earlier papers where I attempted fluid control. This means that we are not only simulating the movement of a piece of fluid, but we wish to coerce it to flow into a prescribed shape. This was super challenging, and I haven't really seen a satisfactory solution that I think artists could use in the industry yet. And now that we have these modern neural network-based algorithms, we are able to solve problems that we never even dreamed of solving just a few years ago. For instance, they can already perform this kind of style transfer for smoke simulations, which is incredible. So, are you thinking what I am thinking? Can one of those maybe tackle fluid control too? Well, that's a tough call. Just to showcase how difficult this problem is: if we wish to have any control over our fluid simulations, and if we are a trained artist, we can sculpt the fluid directly ourselves. Of course, this requires a great deal of expertise and often hours of work. Can we do better? Well, yes. Kind of. We can use a particle system built into most modern 3D modeling programs, with which we can try to guide these particles in a given direction. This took about 20 minutes and it still requires some artistic expertise. So, that's it then? No. Hold on to your papers and check this out. The preparation for this work takes place in virtual reality, where we can make these sketches in 3D, and look at that. The liquid magically takes the shape of our sketch. So, how long did this take? Well, not one hour, and not even 20 minutes. It took one minute. One minute, now we're talking. And even better, we can embed this into a simulation and it will behave like a real piece of fluid should. So, what is all this good for? Well, my experience in computer graphics has been that if we put a powerful tool like this into the hands of capable artists, they are going to create things that we never even thought of creating. With this, they can make a heart from wine, or create a milky skirt, several variants even if we wish. Or a liquid butterfly. I am loving these solutions, and don't forget, all of these can be done embedded into a virtual world and simulated as a real liquid. Now, we talked about 3 solutions and how much time they take, but we didn't see what they look like. Clearly, it is hard to compare these mathematically, so this is going to be, of course, subjective. So, this took an hour. It looks very smooth and is perhaps the most beautiful of the 3 solutions. That is great. However, as a drawback, it does not look like a real-world water splash. The particle system took 20 minutes and creates a more lifelike version of our letter, but the physics is still missing. This looks like a trail of particles, not like a physics system. And let's see the new method. This took only a minute and it finally looks like a real splash. Now, make no mistake, all 3 of these solutions can be excellent depending on our artistic vision. So, how does all this magic happen? What is the architecture of this neural network? Well, this behavior emerges not from one, but from the battle of 2 neural networks. The generator neural network creates new splashes, and the discriminator finds out whether these splashes are real or fake. Over time, they challenge each other and they teach each other to do better. The technique also goes the extra mile beyond just sketching. 
Look, for instance, your brushstrokes can also describe velocities. With this, we can not only control the shape, but even the behavior of the fluid too. So, there we go. Finally, a learning-based technique gives us a proper solution for fluid control. And here comes the best part. It is not only quicker than previous solutions, but it can also be used by anyone. No artistic expertise is required. What a time to be alive. Weights & Biases provides tools to track your experiments in your deep learning projects. Using their system, you can create beautiful reports like this one to explain your findings to your colleagues better. It is used by many prestigious labs, including OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karojona Ifehir."}, {"start": 4.64, "end": 9.68, "text": " Today we are going to control the fate of liquids in virtual worlds."}, {"start": 10.8, "end": 16.080000000000002, "text": " This footage is from one of my earlier papers where I attempted fluid control."}, {"start": 16.080000000000002, "end": 21.12, "text": " This means that we are not only simulating the movement of a piece of fluid,"}, {"start": 21.12, "end": 25.92, "text": " but we wish to coerce it to flow into a prescribed shape."}, {"start": 25.92, "end": 32.480000000000004, "text": " This was super challenging and I haven't really seen a satisfactory solution that I think artists"}, {"start": 32.480000000000004, "end": 38.32, "text": " could use in the industry yet. And now that we have this modern mural network-based"}, {"start": 38.32, "end": 45.84, "text": " algorithms, we are now able to solve problems that we never even dreamed of solving just a few years ago."}, {"start": 45.84, "end": 51.040000000000006, "text": " For instance, they can already perform this kind of style transfer for smoke simulations,"}, {"start": 51.04, "end": 59.28, "text": " which is incredible. So, are you thinking what I am thinking? Can one of those maybe tackle fluid"}, {"start": 59.28, "end": 65.6, "text": " control tool? Well, that's a tough call. Just to showcase how difficult this problem is,"}, {"start": 65.6, "end": 71.44, "text": " if we wish to have any control over our fluid simulations, if we are a trained artist,"}, {"start": 71.44, "end": 78.08, "text": " we can sculpt the fluid directly ourselves. Of course, this requires a great deal of expertise"}, {"start": 78.08, "end": 87.67999999999999, "text": " and often hours of work. Can we do better? Well, yes. Kind of. We can use a particle system built"}, {"start": 87.67999999999999, "end": 94.08, "text": " into most modern 3D modeling programs with which we can try to guide these particles to a given"}, {"start": 94.08, "end": 100.56, "text": " direction. This took about 20 minutes and it still requires some artistic expertise."}, {"start": 100.56, "end": 108.72, "text": " So, that's it then. No. Hold on to your papers and check this out. The preparation for this work"}, {"start": 108.72, "end": 117.04, "text": " takes place in virtual reality where we can make these sketches in 3D and look at that. The liquid"}, {"start": 117.04, "end": 124.56, "text": " magically takes the shape of our sketch. So, how long did this take? Well, not one hour"}, {"start": 124.56, "end": 133.04, "text": " and not even 20 minutes. It took one minute. One minute, now we're talking. And even better,"}, {"start": 133.04, "end": 138.4, "text": " we can embed this into a simulation and it will behave like a real piece of fluid should."}, {"start": 139.76, "end": 146.48000000000002, "text": " So, what is all this good for? Well, my experience has been in computer graphics. Is that if we"}, {"start": 146.48000000000002, "end": 152.72, "text": " put a powerful tool like this into the hands of capable artists, they are going to create things"}, {"start": 152.72, "end": 160.56, "text": " that we never even thought of creating. With this, they can make a heart from wine or create a"}, {"start": 160.56, "end": 169.28, "text": " milky skirt, several variants even if we wish. Or a liquid butterfly. 
I am loving these solutions"}, {"start": 169.28, "end": 176.48, "text": " and don't forget all of these can be done embedded into a virtual world and simulated as a real"}, {"start": 176.48, "end": 183.84, "text": " liquid. Now, we talked about 3 solutions and how much time they take, but we didn't see what they"}, {"start": 183.84, "end": 190.56, "text": " look like. Clearly, it is hard to compare these mathematically, so this is going to be, of course,"}, {"start": 190.56, "end": 198.23999999999998, "text": " subjective. So, this took an hour. It looks very smooth and is perhaps the most beautiful of the"}, {"start": 198.24, "end": 206.4, "text": " 3 solutions. That is great. However, as a drawback, it does not look like a real world water splash."}, {"start": 206.4, "end": 212.0, "text": " The particle system took 20 minutes and creates a more lifelike version of our letter,"}, {"start": 212.0, "end": 218.64000000000001, "text": " but the physics is still missing. This looks like a trail of particles, not like a physics system."}, {"start": 219.20000000000002, "end": 226.8, "text": " And let's see the new method. This took only a minute and it finally looks like a real splash."}, {"start": 226.8, "end": 233.44, "text": " Now, make no mistake, all 3 of these solutions can be excellent depending on our artistic vision."}, {"start": 233.44, "end": 239.60000000000002, "text": " So, how does all this magic happen? What is the architecture of this neural network?"}, {"start": 239.60000000000002, "end": 246.56, "text": " Well, these behavior emerges not from one, but from the battle of 2 neural networks."}, {"start": 246.56, "end": 253.92000000000002, "text": " The generator neural network creates new splashes and the discriminator finds out whether these"}, {"start": 253.92, "end": 262.32, "text": " splashes are real or fake. Over time, they challenge each other and they teach each other to do better."}, {"start": 262.32, "end": 268.47999999999996, "text": " The technique also goes the extra mile beyond just sketching. Look, for instance, your brushstrokes"}, {"start": 268.47999999999996, "end": 276.08, "text": " can also describe velocities. With this, we can not only control the shape, but even the behavior"}, {"start": 276.08, "end": 283.59999999999997, "text": " of the fluid too. So, there we go. Finally, a learning based technique gives us a proper solution"}, {"start": 283.59999999999997, "end": 290.4, "text": " for fluid control. And here comes the best part. It is not only quicker than previous solutions,"}, {"start": 290.4, "end": 298.08, "text": " but it can also be used by anyone. No artistic expertise is required. What a time to be alive."}, {"start": 298.08, "end": 303.36, "text": " Wates and biases provides tools to track your experiments in your deep learning projects."}, {"start": 303.36, "end": 308.96000000000004, "text": " Using their system, you can create beautiful reports like this one to explain your findings to"}, {"start": 308.96000000000004, "end": 315.76, "text": " your colleagues better. It is used by many prestigious labs, including OpenAI, Toyota Research,"}, {"start": 315.76, "end": 322.40000000000003, "text": " GitHub, and more. And the best part is that weights and biases is free for all individuals,"}, {"start": 322.40000000000003, "end": 330.0, "text": " academics, and open source projects. 
Make sure to visit them through wnb.com slash papers,"}, {"start": 330.0, "end": 335.28, "text": " or just click the link in the video description, and you can get a free demo today."}, {"start": 335.28, "end": 340.8, "text": " Our thanks to weights and biases for their long-standing support and for helping us make"}, {"start": 340.8, "end": 369.6, "text": " better videos for you. Thanks for watching and for your generous support, and I'll see you next time."}]
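A minimal sketch of the generator-versus-discriminator training described in the transcript above, assuming splashes are represented as flat feature vectors. The layer sizes, latent_dim, and splash_dim are illustrative assumptions, not the paper's sketch-conditioned architecture.

```python
# Generic GAN training step: G proposes splashes, D judges real vs. fake,
# and each network's loss pushes the other to improve over time.
import torch
import torch.nn as nn

latent_dim, splash_dim = 64, 256  # assumed sizes

G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, splash_dim))
D = nn.Sequential(nn.Linear(splash_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_splashes):
    b = real_splashes.size(0)
    # --- discriminator: label real samples 1, generated samples 0 ---
    fake = G(torch.randn(b, latent_dim)).detach()
    loss_d = bce(D(real_splashes), torch.ones(b, 1)) + bce(D(fake), torch.zeros(b, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # --- generator: try to make the discriminator call its samples real ---
    loss_g = bce(D(G(torch.randn(b, latent_dim))), torch.ones(b, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```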
Two Minute Papers
https://www.youtube.com/watch?v=dZ_5TPWGPQI
New AI: Photos Go In, Reality Comes Out! 🌁
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "ADOP: Approximate Differentiable One-Pixel Point Rendering" is available here: https://arxiv.org/abs/2110.06635 ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to take a collection of photos like these, and magically create a video where we can fly through these photos. How is this even possible? Especially since the input is only a handful of photos. Well, we give it to a learning algorithm and ask it to synthesize a photorealistic video where we fly through the scene as we please. That sounds impossible. Especially since some information is given about the scene, but it is really not much. And everything in between these photos has to be synthesized here. Let's see how well this new method can perform that, but don't expect too much. And wow, it took a handful of photos and filled in the rest so well that we got a smooth and creamy video out of it. So the images certainly look good in isolation. Now let's compare it to the real-world images that we already have, but have hidden from the algorithm, and hold on to your papers. Wow, this is breathtaking. It is not a great deal different, is it? Does this mean... yes, it means that it guesses what reality should look like almost perfectly. Now note that this mainly applies to looking at these images in isolation. As soon as we weave them together into a video and start flying through the scene, we will see some flickering artifacts, but that is to be expected. The AI has to create so much information from so little, and the tiny inaccuracies that appear in each image are different. And when played abruptly after each other, this introduces these artifacts. So which regions should we look at to find these flaws? Well, usually regions where we have very little information in our set of photos and a lot of variation when we move our head. For instance, visibility around thin structures is still a challenge. But of course, you know the joke, how do you spot a Two Minute Papers viewer? They are always looking behind thin fences. Shiny surfaces are a challenge too, as they reflect their environment and change a lot as we move our head around. So how does it compare to previous methods? Well, it creates images that are sharper and more true to the real images. Look, what you see here is a very rare sight. Usually when we see a new technique like this emerge, it almost always does better on some data sets and worse on others. The comparisons are almost always a wash, but here, not at all, not in the slightest. Look, here you see four previous techniques, four scenes, and three different ways of measuring the quality of the output images. And almost none of it matters, because the new technique reliably outperforms all of them everywhere. Except here, in this one case, depending on how we measure how good a solution is. And even then, it's quite close. Absolutely amazing. Make sure to also have a look at the paper in the video description to see that it can also perform filmic tone mapping, change the exposure of the output images, and more. So how did they pull this off? What hardware do we need to train such a neural network? Do we need the server warehouses of Google or OpenAI to make this happen? No, not at all. And here comes the best part. If you have been holding onto your paper so far, now squeeze that paper, because all it takes is a consumer graphics card and 12 to 24 hours of training. Then after that, we can use the neural network for as long as we wish. 
So, recreating reality from a handful of photos, with a neural network that some people can train at home themselves today. The pace of progress in AI research is absolutely amazing. What a time to be alive! This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000s. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Jolnai-Fehir."}, {"start": 4.8, "end": 12.24, "text": " Today, we are going to take a collection of photos like these, and magically create a video"}, {"start": 12.24, "end": 17.2, "text": " where we can fly through these photos. How is this even possible?"}, {"start": 17.2, "end": 21.28, "text": " Especially that the input is only a handful of photos."}, {"start": 22.0, "end": 28.16, "text": " Well, we give it to a learning algorithm and ask it to synthesize a photorealistic video"}, {"start": 28.16, "end": 33.28, "text": " where we fly through the scene as we please. That sounds impossible."}, {"start": 34.0, "end": 40.24, "text": " Especially that some information is given about the scene, but this is really not much."}, {"start": 40.8, "end": 45.28, "text": " And everything in between these photos has to be synthesized here."}, {"start": 45.92, "end": 51.04, "text": " Let's see how well this new method can perform that, but don't expect too much."}, {"start": 51.04, "end": 61.84, "text": " And wow, it took a handful of photos and filled in the rest so well that we got a smooth and creamy video out of it."}, {"start": 62.48, "end": 69.6, "text": " So the images certainly look good in isolation. Now let's compare it to the real-world images"}, {"start": 69.6, "end": 75.68, "text": " that we already have, but have hidden from the algorithm and hold on to your papers."}, {"start": 75.68, "end": 81.60000000000001, "text": " Wow, this is breathtaking. It is not a great deal different, is it?"}, {"start": 81.60000000000001, "end": 88.72000000000001, "text": " Does this mean, yes, it means that it guesses what reality should look like almost perfectly."}, {"start": 88.72000000000001, "end": 94.08000000000001, "text": " Now note that this mainly applies for looking at these images in isolation."}, {"start": 94.08000000000001, "end": 99.52000000000001, "text": " As soon as we weave them together into a video and start flying through the scene,"}, {"start": 99.52000000000001, "end": 103.92000000000002, "text": " we will see some flickering artifacts, but that is to be expected."}, {"start": 103.92, "end": 113.36, "text": " The AI has to create so much information from so little and tiny inaccuracies that appear in each image are different."}, {"start": 113.36, "end": 119.2, "text": " And when played abruptly after each other, this introduces these artifacts."}, {"start": 119.2, "end": 122.96000000000001, "text": " So which regions should we look at to find these flaws?"}, {"start": 122.96000000000001, "end": 131.92000000000002, "text": " Well, usually regions where we have very little information in our set of photos and a lot of variation when we move our head."}, {"start": 131.92, "end": 136.95999999999998, "text": " But for instance, visibility around thin structures is still a challenge."}, {"start": 136.95999999999998, "end": 142.16, "text": " But of course, you know the joke, how do you spot a two-minute paper's viewer?"}, {"start": 142.16, "end": 146.0, "text": " They are always looking behind thin fences."}, {"start": 146.0, "end": 154.16, "text": " Shiny surfaces are a challenge too as they reflect their environment and change a lot as we move our head around."}, {"start": 154.16, "end": 157.67999999999998, "text": " So how does it compare to previous methods?"}, {"start": 157.68, "end": 163.44, "text": " Well, it creates images that are sharper and more true to the real images."}, 
{"start": 163.44, "end": 167.92000000000002, "text": " Look, what you see here is a very rare sight."}, {"start": 167.92000000000002, "end": 177.12, "text": " Usually when we see a new technique like this emerge, it almost always does better on some data sets and worse on others."}, {"start": 177.12, "end": 184.08, "text": " The comparisons are almost always a wash, but here not at all, not in the slightest."}, {"start": 184.08, "end": 194.48000000000002, "text": " Look, here you see four previous techniques, four scenes, and three different ways of measuring the quality of the output images."}, {"start": 194.48000000000002, "end": 202.0, "text": " And almost none of it matters because the new technique reliably outperforms all of them everywhere."}, {"start": 202.0, "end": 208.32000000000002, "text": " Except here, in this one case, depending on how we measure how good a solution is."}, {"start": 208.32000000000002, "end": 211.20000000000002, "text": " And even then, it's quite close."}, {"start": 211.20000000000002, "end": 212.96, "text": " Absolutely amazing."}, {"start": 212.96, "end": 218.64000000000001, "text": " Make sure to also have a look at the paper in the video description to see that it can also perform"}, {"start": 218.64000000000001, "end": 224.24, "text": " Filmic tone mapping, change the exposure of the output images, and more."}, {"start": 224.24, "end": 226.48000000000002, "text": " So how did they pull this off?"}, {"start": 226.48000000000002, "end": 230.32, "text": " What hardware do we need to train such a neural network?"}, {"start": 230.32, "end": 236.08, "text": " Do we need the server warehouses of Google or OpenAI to make this happen?"}, {"start": 236.08, "end": 237.76000000000002, "text": " No, not at all."}, {"start": 237.76000000000002, "end": 240.16, "text": " And here comes the best part."}, {"start": 240.16, "end": 244.88, "text": " If you have been holding onto your paper so far, now squeeze that paper"}, {"start": 244.88, "end": 251.92, "text": " because all it takes is a consumer graphics card and 12 to 24 hours of training."}, {"start": 251.92, "end": 256.8, "text": " Then after that, we can use the neural network for as long as we wish."}, {"start": 256.8, "end": 263.52, "text": " So, recreating reality from a handful of photos with the neural network that some people today"}, {"start": 263.52, "end": 269.92, "text": " can train at home themselves, the pace of progress in AI research is absolutely amazing."}, {"start": 269.92, "end": 272.40000000000003, "text": " What a time to be alive!"}, {"start": 272.40000000000003, "end": 275.84000000000003, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 275.84000000000003, "end": 281.84000000000003, "text": " If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 281.84000000000003, "end": 288.84000000000003, "text": " They recently launched Quadro RTX 6000, RTX 8000, and V100 instances."}, {"start": 288.84000000000003, "end": 296.24, "text": " And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure."}, {"start": 296.24, "end": 301.52, "text": " Class, they are the only Cloud service with 48GB RTX 8000."}, {"start": 301.52, "end": 306.08, "text": " Join researchers at organizations like Apple, MIT, and Caltech"}, {"start": 306.08, "end": 309.84000000000003, "text": " in using Lambda Cloud instances, workstations, or servers."}, {"start": 309.84000000000003, "end": 316.64, "text": " Make sure to go to 
lambdalabs.com slash papers to sign up for one of their amazing GPU instances today."}, {"start": 316.64, "end": 322.12, "text": " Our thanks to Lambda for their long-standing support and for helping us make better videos for you."}, {"start": 322.12, "end": 326.28000000000003, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
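The comparison described in the transcript above scores synthesized novel views against held-out real photographs using several image-quality measures. One common such measure is PSNR; a minimal NumPy sketch follows. Treat this purely as an example of one metric, not a restatement of the exact three the paper reports.

```python
# Peak signal-to-noise ratio between a rendered view and a held-out photo.
import numpy as np

def psnr(rendered, reference, max_val=1.0):
    """Higher is better; both images are arrays with values in [0, max_val]."""
    mse = np.mean((rendered.astype(np.float64) - reference.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# usage (hypothetical arrays): score = psnr(novel_view, ground_truth_photo)
```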
Two Minute Papers
https://www.youtube.com/watch?v=WCAF3PNEc_c
Google's Enhance AI - Super Resolution Is Here! 🔍
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Image Super-Resolution via Iterative Refinement " is available here: https://iterative-refinement.github.io/ https://github.com/Janspiry/Image-Super-Resolution-via-Iterative-Refinement 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to grow people out of noise, of all things. So, I hear you asking, what is going on here? Well, what this work performs is something that we call super-resolution. What is that? Simple: the "enhance" thing. Have a look at this technique from last year. In goes a coarse image or video, and this AI-based method is tasked with this. Yes, this is not science fiction, this is super-resolution, which means that the AI synthesized crisp details onto the image. Now, fast forward a year later, and let's see what this new paper from scientists at Google Brain is capable of. First, a hallmark of a good technique is when we can give it a really coarse input, and it can still do something with it. In this case, this image will be 64x64 pixels, which is almost nothing, I'm afraid, and let's see how it fares. This will not be easy. And, well, the initial results are not good. But don't put too much of a stake in the initial results, because this work iteratively refines this noise, which means that you should hold onto your papers and... Oh yes, it means that it improves over time. It's getting there. Whoa, still going, and... Wow, I can hardly believe what has happened here. In each case, in goes a really coarse input image, where we get so little information, look, the eye color is often given by only a couple of pixels, and we get a really crisp and believable output. What's more, it can even deal with glasses too. Now, of course, this is not the first paper on super-resolution; what's more, it is not even the hundredth paper performing super-resolution. So, comparing to previous works is vital here. We will compare this to previous methods in two different ways. One, of course: we are going to look. The previous regression-based methods perform reasonably well; however, if we take a closer look, we see that the images are a little blurry, high-frequency details are missing. And now, let's see if the new method can do any better. Well, this looks great, but we are fellow scholars here, we know that we can only evaluate this result in the presence of the true image. Now, let's see. Nice, we would have to zoom in real close to find out that the two images are not the same. Fantastic! Now, while we are looking at these very convincing, high-resolution outputs, please note that we are only really scratching the surface here. The heart and soul of a good super-resolution paper is proper evaluation and user studies, and the paper contains a ton more details on that. For instance, this part of the study shows how likely people were to confuse the synthesized images with real ones. Previous methods, especially PULSE, which is an amazing technique, reached about 33%, which means that most of the time, people found out the trick, but... Whoa! Look here, the new method is almost at the 50% mark. This is the very first time that I see a super-resolution technique where people can barely tell that these images are synthetic. We are getting one step closer to this technique getting deployed in real-world products. It could improve the quality of your Zoom meetings, video games, online images, and much, much more. Now, note that not even this one is perfect. Look, as we increase the resolution of the output image, the users are more likely to find out that these are synthetic images. But still, for now, this is an amazing leap forward in just one paper. 
I can hardly believe that we can take this image and make it into this image using a learning-based method. What a time to be alive! This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000s. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time!
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karojona Ifehir."}, {"start": 5.0, "end": 11.0, "text": " Today we are going to grow people out of noise of all things."}, {"start": 11.0, "end": 15.0, "text": " So, I hear you asking, what is going on here?"}, {"start": 15.0, "end": 20.0, "text": " Well, what this work performs is something that we call super-resolution."}, {"start": 20.0, "end": 22.0, "text": " What is that?"}, {"start": 22.0, "end": 24.0, "text": " Simple, the enhanced thing."}, {"start": 24.0, "end": 27.0, "text": " Have a look at this technique from last year."}, {"start": 27.0, "end": 35.0, "text": " In Goals, a course image or video, and this AI-based method is tasked with this."}, {"start": 35.0, "end": 45.0, "text": " Yes, this is not science fiction, this is super-resolution, which means that the AI synthesized crisp details onto the image."}, {"start": 45.0, "end": 53.0, "text": " Now, fast forward a year later, and let's see what this new paper from scientists at Google Brain is capable of."}, {"start": 53.0, "end": 61.0, "text": " First, a hallmark of a good technique is when we can give it a really coarse input, and it can still do something with it."}, {"start": 61.0, "end": 71.0, "text": " In this case, this image will be 64x64 pixels, which is almost nothing, I'm afraid, and let's see how it fers."}, {"start": 71.0, "end": 74.0, "text": " This will not be easy."}, {"start": 74.0, "end": 78.0, "text": " And, well, the initial results are not good."}, {"start": 78.0, "end": 89.0, "text": " But, don't put too much of a stake in the initial results, because this work iteratively refines this noise, which means that you should hold onto your papers and..."}, {"start": 89.0, "end": 93.0, "text": " Oh, yes, it means that it improves over time."}, {"start": 93.0, "end": 95.0, "text": " It's getting there."}, {"start": 95.0, "end": 99.0, "text": " Whoa, still going, and..."}, {"start": 99.0, "end": 103.0, "text": " Wow, I can hardly believe what has happened here."}, {"start": 103.0, "end": 119.0, "text": " In each case, Ingo's a really coarse input image, where we get so little information, look, the eye color is often given by only a couple pixels, and we get a really crisp and believable output."}, {"start": 119.0, "end": 123.0, "text": " What's more, it can even deal with glasses too."}, {"start": 123.0, "end": 133.0, "text": " Now, of course, this is not the first paper on super-resolution, what's more, it is not even the hundredth paper performing super-resolution."}, {"start": 133.0, "end": 136.0, "text": " So, comparing to previous works is vital here."}, {"start": 136.0, "end": 141.0, "text": " We will compare this to previous methods in two different ways."}, {"start": 141.0, "end": 144.0, "text": " One, of course, we are going to look."}, {"start": 144.0, "end": 157.0, "text": " In previous regression-based methods perform reasonably well, however, if we take a closer look, we see that the images are a little blurry, high-frequency details are missing."}, {"start": 157.0, "end": 161.0, "text": " And now, let's see if the new method can do any better."}, {"start": 161.0, "end": 171.0, "text": " Well, this looks great, but we are fellow scholars here, we know that we can only evaluate this result in the presence of the true image."}, {"start": 171.0, "end": 173.0, "text": " Now, let's see."}, {"start": 173.0, "end": 180.0, "text": " Nice, we would have to zoom in real close to find out that the two images are not 
the same."}, {"start": 180.0, "end": 181.0, "text": " Fantastic!"}, {"start": 181.0, "end": 190.0, "text": " Now, while we are looking at these very convincing, high-resolution outputs, please note that we are only really scratching the surface here."}, {"start": 190.0, "end": 200.0, "text": " The heart and soul of a good super-resolution paper is proper evaluation and user studies, and the paper contains a ton more details on that."}, {"start": 200.0, "end": 208.0, "text": " For instance, this part of the study shows how likely people were to confuse the synthesized images with real ones."}, {"start": 208.0, "end": 220.0, "text": " Previous methods, especially Paul's, which is an amazing technique, reached about 33%, which means that most of the time, people found out the trick, but..."}, {"start": 220.0, "end": 236.0, "text": " Whoa! Look here, the new method is almost at the 50% mark. This is the very first time that I see a super-resolution technique where people can barely tell that these images are synthetic."}, {"start": 236.0, "end": 242.0, "text": " We are getting one step closer to this technique getting deployed in real-world products."}, {"start": 242.0, "end": 249.0, "text": " It could improve the quality of your Zoom meetings, video games, online images, and much, much more."}, {"start": 249.0, "end": 262.0, "text": " Now, note that not even this one is perfect. Look, as we increase the resolution of the output of the image, the users are more likely to find out that these are synthetic images."}, {"start": 262.0, "end": 268.0, "text": " But still, for now, this is an amazingly forward in just one paper."}, {"start": 268.0, "end": 275.0, "text": " I can hardly believe that we can take this image and make it into this image using a learning-based method."}, {"start": 275.0, "end": 280.0, "text": " What a time to be alive! This episode has been supported by Lambda GPU Cloud."}, {"start": 280.0, "end": 286.0, "text": " If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 286.0, "end": 300.0, "text": " They recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Asia."}, {"start": 300.0, "end": 306.0, "text": " Plus, they are the only Cloud service with 48GB, RTX 8000."}, {"start": 306.0, "end": 314.0, "text": " Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers."}, {"start": 314.0, "end": 321.0, "text": " Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today."}, {"start": 321.0, "end": 327.0, "text": " Our thanks to Lambda for their long-standing support and for helping us make better videos for you."}, {"start": 327.0, "end": 331.0, "text": " Thanks for watching and for your generous support, and I'll see you next time!"}]
Two Minute Papers
https://www.youtube.com/watch?v=Y6ezNI0Idsc
Watch This Statue Grow Out Of Nothing! 🗽
❤️ Check out Cohere and sign up for free today: https://cohere.ai/papers 📝 The paper "Large Steps in Inverse Rendering of Geometry" is available here: https://rgl.epfl.ch/publications/Nicolet2021Large The code is available here: https://github.com/rgl-epfl/large-steps-pytorch 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to learn how to take any shape in the real world, make a digital copy of it, and place it into our virtual world. This can be done through something that we call differentiable rendering. What is that? Well, simple: we take a photograph, and find a photorealistic material model that we can put in our light simulation program that matches it. What this means is that now we can essentially put this real material into a virtual world. This work did very well with materials, but it did not capture the geometry. This other work is from Wenzel Jakob's group, and it jointly found the geometry and material properties. Seeing these images gradually morph into the right solution is an absolutely beautiful sight, but as you see, high-frequency details were not as good. You see, here these details are gone. So, where does this put us? If we need only the materials, we can do really well, but then no geometry. If we wish to get the geometry and the materials, we can use this, but we lose a lot of detail. There seems to be no way out. Is there a solution for this? Well, Wenzel Jakob's amazing light transport simulation group is back with guns blazing. Let's see what they were up to. First, let's try to reproduce a 3D geometry from a bunch of triangles. Let's see and... Ouch! Not good, right? So, what is the problem here? The problem is that we have tons of self-intersections, leading to a piece of tangled mesh geometry. This is incorrect. Not what we are looking for. Now, let's try to improve this by applying a step that we call regularization. This guides the potential solutions towards smoother results. Sounds good? Hoping that this will do the trick, let's have a look together. So, what do you think? Better, but there are some details that are lost, and my main issue is that the whole geometry is in fluctuation. Is that a problem? Yes, it is. Why? Because this jumpy behavior means that it has many competing solutions that it can't choose from. Essentially, the algorithm says maybe this, or not, perhaps this instead? No, not this. How about this? And it just keeps going on forever. It doesn't really know what makes a good reconstruction. Now, hold on to your papers and let's see how the new method does with these examples. Oh my! This converges somewhere, which means that at the end of this process, it settles on something. Finally, this one knows what good is. Fantastic. But now, these geometries were not that difficult. Let's give it a real challenge. Here comes the dragon. Can it deal with that? Just look at how beautifully it grows out of this block. Yes, but look, the details are not quite there. So, are we done? That's it? No, not even close. Do not despair for a second. This new paper also proposes an isotropic remeshing step, which does this. On the count of 1, 2, 3. Boom! We are done. Whoa! So good! And get this. The solution is not restricted to only geometry. It also works on textures. Now, make no mistake. This feature is not super useful in its current state, but it may pave the way to even more sophisticated methods two more papers down the line. So, importing real geometry into our virtual worlds? Yes, please. This episode has been supported by Cohere AI. Cohere builds large language models and makes them available through an API, so businesses can add advanced language understanding to their system or app quickly with just one line of code. 
You can use your own data, whether it's text from customer service requests, legal contracts, or social media posts, to create your own custom models to understand text, or even generate it. For instance, it can be used to automatically determine whether your messages are about your business hours, returns, or shipping, or it can be used to generate a list of possible sentences you can use for your product descriptions. Make sure to go to cohere.ai slash papers or click the link in the video description and give it a try today. It's super easy to use. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Carlos Zonai-Fehir."}, {"start": 4.64, "end": 9.28, "text": " Today, we are going to learn how to take any shape in the real world,"}, {"start": 10.0, "end": 14.56, "text": " make a digital copy of it, and place it into our virtual world."}, {"start": 15.200000000000001, "end": 19.6, "text": " These can be done through something that we call differentiable rendering."}, {"start": 20.400000000000002, "end": 28.16, "text": " What is that? Well, simple, we take a photograph, and find a photorealistic material model"}, {"start": 28.16, "end": 32.16, "text": " that we can put in our light simulation program that matches it."}, {"start": 32.16, "end": 39.6, "text": " What this means is that now we can essentially put this real material into a virtual world."}, {"start": 39.6, "end": 45.2, "text": " This work did very well with materials, but it did not capture the geometry."}, {"start": 46.08, "end": 53.6, "text": " This other work is from Vensai Jacob's group, and it jointly found the geometry and material"}, {"start": 53.6, "end": 60.32, "text": " properties. Seeing these images gradually morph into the right solution is an absolutely"}, {"start": 60.32, "end": 68.32000000000001, "text": " beautiful site, but as you see high frequency details were not as good. You see here these details"}, {"start": 68.32000000000001, "end": 76.32, "text": " are gone. So, where does this put us? If we need only the materials we can do really well,"}, {"start": 76.32, "end": 84.16, "text": " but then no geometry. If we wish to get the geometry and the materials, we can use this,"}, {"start": 84.16, "end": 90.96, "text": " but we lose a lot of detail. There seems to be no way out. Is there a solution for this?"}, {"start": 91.52, "end": 98.08, "text": " Well, Vensai Jacob's amazing light transport simulation group is back with GANS blazing."}, {"start": 98.08, "end": 105.19999999999999, "text": " Let's see what they were up to. First, let's try to reproduce a 3D geometry from a bunch of triangles."}, {"start": 105.2, "end": 115.60000000000001, "text": " Let's see and... Ouch! Not good, right? So, what is the problem here? The problem is that we have"}, {"start": 115.60000000000001, "end": 123.04, "text": " tons of self-intersections leading to a piece of tangled mesh geometry. This is incorrect."}, {"start": 123.04, "end": 129.28, "text": " Not what we are looking for. Now, let's try to improve this by applying a step that we call"}, {"start": 129.28, "end": 136.72, "text": " regularization. This guides the potential solutions towards smoother results. Sounds good?"}, {"start": 136.72, "end": 143.28, "text": " Hoping that this will do the trick. Let's have a look together. So, what do you think?"}, {"start": 143.28, "end": 151.92000000000002, "text": " Better, but there are some details that are lost and my main issue is that the whole geometry is"}, {"start": 151.92, "end": 160.72, "text": " inflectuation. Is that a problem? Yes, it is. Why? Because this jumpy behavior means that it has"}, {"start": 160.72, "end": 167.92, "text": " many competing solutions that it can't choose from. Essentially, the algorithm says maybe this,"}, {"start": 167.92, "end": 176.88, "text": " or not, perhaps this instead? No, not this. How about this? And it just keeps going on forever."}, {"start": 176.88, "end": 183.6, "text": " It doesn't really know what makes a good reconstruction. 
Now, hold on to your papers and let's see"}, {"start": 183.6, "end": 192.32, "text": " how the new method does with these examples. Oh, my! This converges somewhere, which means that at"}, {"start": 192.32, "end": 200.16, "text": " the end of this process, it settles on something. Finally, this one knows what good is. Fantastic."}, {"start": 200.16, "end": 207.92, "text": " But now, these geometries were not that difficult. Let's give it a real challenge. Here comes the dragon."}, {"start": 207.92, "end": 216.88, "text": " Can it deal with that? Just look at how beautifully it grows out of this block. Yes, but look,"}, {"start": 216.88, "end": 226.24, "text": " the details are not quite there. So, are we done? That's it. No, not even close. Do not despair for"}, {"start": 226.24, "end": 234.56, "text": " a second. This new paper also proposes an isotropic remashing step, which does this. On the count of 1,"}, {"start": 235.36, "end": 247.44, "text": " 2, 3. Boom! We are done. Whoa! So good! And get this. The solution is not restricted to only geometry."}, {"start": 247.44, "end": 255.12, "text": " It also works on textures. Now, make no mistake. This feature is not super useful in its current state,"}, {"start": 255.12, "end": 260.56, "text": " but it may pave the way to even more sophisticated methods to more papers down the line."}, {"start": 261.28000000000003, "end": 268.16, "text": " So, importing real geometry into our virtual worlds? Yes, please. This episode has been"}, {"start": 268.16, "end": 274.88, "text": " supported by CoHear AI. CoHear builds large language models and makes them available through an"}, {"start": 274.88, "end": 282.32, "text": " API so businesses can add advanced language understanding to their system or app quickly with"}, {"start": 282.32, "end": 289.04, "text": " just one line of code. You can use your own data, whether it's text from customer service requests,"}, {"start": 289.04, "end": 295.76, "text": " legal contracts, or social media posts to create your own custom models to understand text,"}, {"start": 295.76, "end": 303.12, "text": " or even generated. For instance, it can be used to automatically determine whether your messages"}, {"start": 303.12, "end": 310.64, "text": " are about your business hours, returns, or shipping, or it can be used to generate a list of"}, {"start": 310.64, "end": 316.96, "text": " possible sentences you can use for your product descriptions. Make sure to go to CoHear.AI"}, {"start": 316.96, "end": 323.52, "text": " slash papers or click the link in the video description and give it a try today. It's super easy"}, {"start": 323.52, "end": 342.96, "text": " to use. Thanks for watching and for your generous support and I'll see you next time."}]
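A simplified sketch of the regularized geometry reconstruction discussed in the transcript above: vertex positions are optimized so a differentiable renderer matches the target photos, while a Laplacian term discourages tangled, jumpy meshes. The actual paper achieves its stability differently (by reparameterizing the optimization), so this penalty-based version and the render_loss placeholder are assumptions for illustration only.

```python
# Gradient-based mesh fitting with a uniform Laplacian smoothness penalty.
import torch

def laplacian_energy(verts, neighbors):
    """Sum of squared distances between each vertex and the mean of its neighbors."""
    energy = 0.0
    for i, nbrs in enumerate(neighbors):
        energy = energy + ((verts[i] - verts[list(nbrs)].mean(dim=0)) ** 2).sum()
    return energy

def optimize(verts, neighbors, render_loss, steps=500, lam=10.0, lr=1e-2):
    # render_loss(verts) is an assumed differentiable image-matching loss
    verts = verts.clone().requires_grad_(True)
    opt = torch.optim.Adam([verts], lr=lr)
    for _ in range(steps):
        loss = render_loss(verts) + lam * laplacian_energy(verts, neighbors)
        opt.zero_grad(); loss.backward(); opt.step()
    return verts.detach()
```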
Two Minute Papers
https://www.youtube.com/watch?v=QR5MFQnZM3k
NVIDIA’s Stretchy Simulation: Super Quick! 🐘
❤️ Train a neural network and track your experiments with Weights & Biases here: http://wandb.me/paperintro 📝 The paper "A Constraint-based Formulation of Stable Neo-Hookean Materials" is available here: - Paper: https://mmacklin.com/neohookean.pdf - Online demo: https://matthias-research.github.io/pages/challenges/softBody.html 📝 The paper "Gaussian Material Synthesis" (with the pseudocode) is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #nvidia #gamedev
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to have a look at beautiful simulations from a quick paper. But wait, how can a research paper be quick? Well, it is quick for two key reasons. Reason number one: look at this complex soft body simulation. This is not a jumpsuit, this showcases the geometry of the outer tissue of this elephant, and it is made of 80,000 elements. And now, hold onto your papers, away with the geometry, and feast your eyes upon this beautiful simulation. My goodness, tons of stretching, moving and deformation. Wow! So, how long do we have to wait for a result like this? All-nighters, right? Well, about that quick part I just mentioned, it runs very, very quickly. Eight milliseconds per frame. Yes, that means that it runs easily in real time on a modern graphics card. And this work has another aspect that is also quick, which we will discuss in a moment. But first, let's see some of the additional advantages it has compared to previous methods. For instance, if you think this was a stretchy simulation, no no, this is a stretchy simulation. Look, this is a dragon. Well, it doesn't look like a dragon, does it? Why is that? Well, it has been compressed and scrambled into a tiny plane. But if we let go of the forces, ah, there it is. It was able to regain its original shape. And the algorithm can withstand even this sort of torture test, which is absolutely amazing. One more key advantage is the lack of volume dissipation. Yes, believe it or not, many previous simulation methods struggle with things disappearing over time. Don't believe it? Let me show you this experiment with gooey dragons and balls. When using a traditional technique, whoa, this guy is gone. So, let's see what a previous method would do in this case. We start out with this block, and after a fair bit of stretching, wait a second. Are you trying to tell me that this has the same amount of volume as this? No sir, this is volume dissipation at its finest. So, can the new method be so quick and still retain the entirety of the volume? Yes sir, loving it. Let's see another example of volume preservation. Okay, I am loving this. These transformations are not reasonable. This is indeed a very challenging test. Can it withstand all this? Keep your eyes on the volume of the cubes, which changes a little. It's not perfect, but considering the crazy things we are doing to it, this is very respectable. And in the end, look, when we let go, we get most of it back. I'll freeze the appropriate frame for you. So, the second quick aspect, what is it? Yes, it is quick to run, but that's not all. It is also quick to implement. For reference, if you wish to implement one of our earlier papers on material synthesis, this is the number of variables you have to remember. And this is the pseudocode for the algorithm itself. What is that? Well, this shows what steps we need to take to implement our technique in a computer program. I don't consider this to be too complex, but now compare it to this simulation algorithm's pseudocode. Whoa! Much simpler. I would wager that if everything goes well, a competent computer graphics research scientist could implement this in a day. And that is a rare sight for a modern simulation algorithm, and that is excellent. I think you are going to hear from this technique a great deal more. Who wrote it? Of course, Miles Macklin and Matthias Müller, two excellent research scientists at Nvidia. Congratulations! 
And with this kind of progress, just imagine what we will be able to do two more papers down the line. What a time to be alive! This video has been supported by Weights & Biases. Being a machine learning researcher means doing tons of experiments and, of course, creating tons of data. But I am not looking for data, I am looking for insights. And Weights & Biases helps with exactly that. They have tools for experiment tracking, dataset and model versioning, and even hyperparameter optimization. No wonder this is the experiment tracking tool of choice of OpenAI, Toyota Research, Samsung, and many more prestigious labs. Make sure to use the link wandb.me slash paperintro, or just click the link in the video description, and try this 10-minute example of Weights & Biases today to experience the wonderful feeling of training a neural network and being in control of your experiments. After you try it, you won't want to go back. Thanks for watching and for your generous support, and I'll see you next time!
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zona Ifehir."}, {"start": 5.0, "end": 11.0, "text": " Today, we are going to have a look at beautiful simulations from a quick paper."}, {"start": 11.0, "end": 15.0, "text": " But wait, how can a research paper be quick?"}, {"start": 15.0, "end": 18.0, "text": " Well, it is quick for two key reasons."}, {"start": 18.0, "end": 23.0, "text": " Reason number one, look at this complex soft body simulation."}, {"start": 23.0, "end": 33.0, "text": " This is not a jumpsuit, this showcases the geometry of the outer tissue of this elephant and is made of 80,000 elements."}, {"start": 33.0, "end": 42.0, "text": " And now, hold onto your papers away with the geometry and feast your eyes upon this beautiful simulation."}, {"start": 42.0, "end": 48.0, "text": " My goodness, tons of stretching, moving and deformation."}, {"start": 48.0, "end": 49.0, "text": " Wow!"}, {"start": 49.0, "end": 54.0, "text": " So, how long do we have to wait for a result like this?"}, {"start": 54.0, "end": 56.0, "text": " All nighters, right?"}, {"start": 56.0, "end": 62.0, "text": " Well, about that quick part I just mentioned, it runs very, very quickly."}, {"start": 62.0, "end": 65.0, "text": " Eight milliseconds per frame."}, {"start": 65.0, "end": 71.0, "text": " Yes, that means that it runs easily in real time on a modern graphics card."}, {"start": 71.0, "end": 78.0, "text": " And this work has some other aspect that is also quick, which we will discuss in a moment."}, {"start": 78.0, "end": 85.0, "text": " But first, let's see some of the additional advantages it has compared to previous methods."}, {"start": 85.0, "end": 93.0, "text": " For instance, if you think this was a stretchy simulation, no no, this is a stretchy simulation."}, {"start": 93.0, "end": 96.0, "text": " Look, this is a dragon."}, {"start": 96.0, "end": 99.0, "text": " Well, it doesn't look like a dragon, does it?"}, {"start": 99.0, "end": 100.0, "text": " Why is that?"}, {"start": 100.0, "end": 105.0, "text": " Well, it has been compressed and scrambled into a tiny plane."}, {"start": 105.0, "end": 111.0, "text": " But if we let go of the forces, ah, there it is."}, {"start": 111.0, "end": 114.0, "text": " It was able to regain its original shape."}, {"start": 114.0, "end": 121.0, "text": " And the algorithm can withstand even this sort of torture test, which is absolutely amazing."}, {"start": 121.0, "end": 125.0, "text": " One more key advantage is the lack of volume dissipation."}, {"start": 125.0, "end": 133.0, "text": " Yes, believe it or not, many previous simulation methods struggle with things disappearing over time."}, {"start": 133.0, "end": 135.0, "text": " Don't believe it."}, {"start": 135.0, "end": 140.0, "text": " Let me show you this experiment with GUI dragons and balls."}, {"start": 140.0, "end": 146.0, "text": " When using a traditional technique, whoa, this guy is gone."}, {"start": 146.0, "end": 150.0, "text": " So, let's see what a previous method would do in this case."}, {"start": 150.0, "end": 157.0, "text": " We start out with this block and after a fair bit of stretching, wait a second."}, {"start": 157.0, "end": 163.0, "text": " Are you trying to tell me that this has the same amount of volume as this?"}, {"start": 163.0, "end": 168.0, "text": " No sir, this is volume dissipation at its finest."}, {"start": 168.0, "end": 177.0, "text": " So, can the new method be so quick and still retain the entirety of the volume?"}, 
{"start": 177.0, "end": 179.0, "text": " Yes sir, loving it."}, {"start": 179.0, "end": 183.0, "text": " Let's see another example of volume preservation."}, {"start": 183.0, "end": 188.0, "text": " Okay, I am loving this. These transformations are not reasonable."}, {"start": 188.0, "end": 191.0, "text": " This is indeed a very challenging test."}, {"start": 191.0, "end": 194.0, "text": " Can it withstand all this?"}, {"start": 194.0, "end": 199.0, "text": " Keep your eyes on the volume of the cubes, which change a little."}, {"start": 199.0, "end": 206.0, "text": " It's not perfect, but considering the crazy things we are doing to it, this is very respectable."}, {"start": 206.0, "end": 212.0, "text": " And in the end, look, when we let go, we get most of it back."}, {"start": 212.0, "end": 215.0, "text": " I'll freeze the appropriate frame for you."}, {"start": 215.0, "end": 219.0, "text": " So, second quick aspect, what is it?"}, {"start": 219.0, "end": 222.0, "text": " Yes, it is quick to run, but that's not all."}, {"start": 222.0, "end": 225.0, "text": " It is also quick to implement."}, {"start": 225.0, "end": 230.0, "text": " For reference, if you wish to implement one of our earlier papers on material synthesis,"}, {"start": 230.0, "end": 236.0, "text": " this is the number of variables you have to remember."}, {"start": 236.0, "end": 239.0, "text": " And this is the pseudo code for the algorithm itself."}, {"start": 239.0, "end": 247.0, "text": " What is that? Well, this shows what steps we need to take to implement our technique in a computer program."}, {"start": 247.0, "end": 254.0, "text": " I don't consider this to be too complex, but now compare it to these simulation algorithms pseudo code."}, {"start": 254.0, "end": 257.0, "text": " Whoa! Much simpler."}, {"start": 257.0, "end": 265.0, "text": " I would wager that if everything goes well, a competent computer graphics research scientist could implement this in a day."}, {"start": 265.0, "end": 271.0, "text": " And that is a rare sight for a modern simulation algorithm, and that is excellent."}, {"start": 271.0, "end": 275.0, "text": " I think you are going to hear from this technique a great deal more."}, {"start": 275.0, "end": 283.0, "text": " Who wrote it? Of course, Miles McLean and Matias M\u00fcller, two excellent research scientists at Nvidia."}, {"start": 283.0, "end": 292.0, "text": " Congratulations! And with this kind of progress, just imagine what we will be able to do two more papers down the line."}, {"start": 292.0, "end": 298.0, "text": " What a time to be alive! 
This video has been supported by weights and biases."}, {"start": 298.0, "end": 305.0, "text": " Being a machine learning researcher means doing tons of experiments and, of course, creating tons of data."}, {"start": 305.0, "end": 310.0, "text": " But I am not looking for data, I am looking for insights."}, {"start": 310.0, "end": 313.0, "text": " And weights and biases helps with exactly that."}, {"start": 313.0, "end": 321.0, "text": " They have tools for experiment tracking, data set and model versioning, and even hyper-parameter optimization."}, {"start": 321.0, "end": 330.0, "text": " No wonder this is the experiment tracking tool choice of OpenAI, Toyota Research, Samsung, and many more prestigious labs."}, {"start": 330.0, "end": 350.0, "text": " Make sure to use the link WNB.ME-slash-paper-intro, or just click the link in the video description, and try this 10-minute example of weights and biases today to experience the wonderful feeling of training a neural network and being in control of your experiments."}, {"start": 350.0, "end": 357.0, "text": " After you try it, you won't want to go back. Thanks for watching and for your generous support, and I'll see you next time!"}]
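As a quick aside on the volume dissipation discussed in the segments above: a simple way to check whether a deformable-body solver "loses" material is to track the total volume of its tetrahedral mesh over time. The following is a minimal Python sketch of that bookkeeping, not code from the paper; the toy mesh, the artificial per-frame shrinking, and the numbers are invented purely for illustration.

```python
import numpy as np

def tet_mesh_volume(vertices, tets):
    """Total volume of a tetrahedral mesh: sum of per-tet volumes."""
    v = vertices[tets]                              # shape (T, 4, 3)
    a, b, c, d = v[:, 0], v[:, 1], v[:, 2], v[:, 3]
    # Signed volume of each tet is det([b-a, c-a, d-a]) / 6
    signed = np.einsum("ij,ij->i", np.cross(b - a, c - a), d - a) / 6.0
    return np.abs(signed).sum()

# Toy example: one unit tetrahedron, "simulated" with an artificial volume drift.
verts0 = np.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
tets = np.array([[0, 1, 2, 3]])
v_ref = tet_mesh_volume(verts0, tets)

for frame in range(1, 6):
    shrink = 0.99 ** frame                          # stand-in for a lossy solver
    v = tet_mesh_volume(verts0 * shrink, tets)
    print(f"frame {frame}: volume loss {100.0 * (1.0 - v / v_ref):.2f}%")
```

In a real pipeline one would run this check on the solver's actual vertex positions every frame; a steadily growing loss percentage is the telltale sign of the dissipation shown in the block-stretching comparison.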
Two Minute Papers
https://www.youtube.com/watch?v=Mrdkyv0yXxY
Is Simulating Tiny Cloth Wrinkles Possible? 👕
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "GPU-based Simulation of Cloth Wrinkles at Submillimeter Levels" is available here: https://web.cse.ohio-state.edu/~wang.3602/publications.html https://dl.acm.org/doi/abs/10.1145/3450626.3459787 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to create beautiful, virtual clothes and marvel at how quickly we can simulate them. And the kicker is that these simulations are able to create details that are so tiny they are smaller than a millimeter. And when I first saw the title of this paper, I had two thoughts. First I asked, sub-millimeter level, is this really necessary? Well, here is a dress rendered at the level of millimeters. And here is what it would look like with this new simulation technique. Hmm, so many crisp details suddenly appeared. Okay, you got me. I am now converted. So let's proceed to the second issue. Here I also said I will believe this kind of quality in a simulation when I see it. So how does this work? Well, we can give it a piece of coarse input geometry and this new technique synthesizes and simulates additional wrinkles on it. For instance, here we can add these vertical shirring patterns to it. And not only that, but we can still have collision detection so it can interact with other objects and suddenly the whole piece looks beautifully lifelike. And as you see here, the direction of these patterns can be chosen by us and it gets better because it can handle other kinds of patterns too. I am a light transport researcher by trade and I am very pleased by how beautifully these specular highlights are playing with the light. So good. Now let's see it in practice. On the left you see the coarse geometry input and on the right you see the magical new clothes this new method can create from them. And yes, this is it. Finally! I always wondered why virtual characters with simulated clothes looked a little flat for so many years. These are the details that were missing. Just look at the difference this makes. Loving it. But how does this stack up against previous methods? Well, here is a previous technique from last year and here is the new one. Okay, how do we know which one is really better? Well, we do it in terms of mathematics, and when looking at the relative errors from a reference simulation, the new one is more accurate. Great. But hold on to your papers because here comes the best part. Whoa! It is not only more accurate but it is blistering fast, easily 8 times faster than the previous method. So where does this put us in terms of total execution time? At approximately one second per frame, often even less. What? One second per image for a crisp, sub-millimeter-level cloth simulation that is also quite accurate. Wow, the pace of progress in computer graphics research is absolutely amazing. And just imagine what we will be able to do two more papers down the line. This might run in real time easily. Now, drawbacks. Well, not really a drawback, but there are cases where the sub-millimeter-level simulation results in slightly crisper creases, but not much more. I really had to hunt for differences in this one. If you have spotted stark differences between the two here, please let me know in the comments. And here comes one more amazing thing. This paper was written by Huamin Wang and that's it. A single-author paper that has been accepted to the SIGGRAPH conference, which is perhaps the most prestigious conference in computer graphics. That is a rare sight indeed. You see, sometimes even a single researcher can make all the difference. Huge congratulations. This episode has been supported by Lambda GPU Cloud.
If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000 and V100 instances, and hold on to your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48 GB RTX 8000s. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today. Thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zonai-Fehir."}, {"start": 5.0, "end": 12.0, "text": " Today we are going to create beautiful, virtual clothes and marvel at how quickly we can simulate them."}, {"start": 12.0, "end": 21.0, "text": " And the kicker is that these simulations are able to create details that are so tiny they are smaller than a millimeter."}, {"start": 21.0, "end": 26.0, "text": " And when I first saw the title of this paper, I had two thoughts."}, {"start": 26.0, "end": 32.0, "text": " First I asked, sub millimeter level, is this really necessary?"}, {"start": 32.0, "end": 37.0, "text": " Well, here is a dress rendered at the level of millimeters."}, {"start": 37.0, "end": 42.0, "text": " And here is what it would look like with this new simulation technique."}, {"start": 42.0, "end": 47.0, "text": " Hmm, so many crisp details suddenly appeared."}, {"start": 47.0, "end": 51.0, "text": " Okay, you got me. I am now converted."}, {"start": 51.0, "end": 61.0, "text": " So let's proceed to the second issue. Here I also said I will believe this kind of quality in a simulation when I see it."}, {"start": 61.0, "end": 63.0, "text": " So how does this work?"}, {"start": 63.0, "end": 74.0, "text": " Well, we can give it a piece of course input geometry and this new technique synthesizes and simulates additional wrinkles on it."}, {"start": 74.0, "end": 91.0, "text": " For instance, here we can add this vertical shirring patterns to it. And not only that, but we can still have collision detection so it can interact with other objects and suddenly the whole piece looks beautifully life like."}, {"start": 91.0, "end": 102.0, "text": " And as you see here, the direction of these patterns can be chosen by us and it gets better because it can handle other kinds of patterns too."}, {"start": 102.0, "end": 111.0, "text": " I am a light transport researcher by trade and I am very pleased by how beautifully these specular highlights are playing with the light."}, {"start": 111.0, "end": 125.0, "text": " So good. Now let's see it in practice. On the left you see the course geometry input and on the right you see the magical new clothes this new method can create from them."}, {"start": 125.0, "end": 137.0, "text": " And yes, this is it. Finally, I always wondered why virtual characters with simulated clothes always looked a little flat for many years now."}, {"start": 137.0, "end": 142.0, "text": " These are the details that were missing. Just look at the difference this makes."}, {"start": 142.0, "end": 148.0, "text": " Loving it. But how does this stack up against previous methods?"}, {"start": 148.0, "end": 155.0, "text": " Well, here is a previous technique from last year and here is the new one."}, {"start": 155.0, "end": 170.0, "text": " Okay, how do we know which one is really better? Well, we do it in terms of mathematics and when looking at the relative errors from a reference simulation, the new one is more accurate. Great."}, {"start": 170.0, "end": 184.0, "text": " But hold on to your papers because here comes the best part. Whoa! It is not only more accurate but it is blistering fast, easily 8 times faster than the previous method."}, {"start": 184.0, "end": 195.0, "text": " So where does this put us in terms of total execution time? Add approximately one second per frame, often even less."}, {"start": 195.0, "end": 205.0, "text": " What? 
One second per image for a crisp, sub millimeter level class simulation that is also quite accurate."}, {"start": 205.0, "end": 216.0, "text": " Wow, the pace of progress in computer graphics research is absolutely amazing. And just imagine what we will be able to do two more papers down the line."}, {"start": 216.0, "end": 219.0, "text": " This might run in real time easily."}, {"start": 219.0, "end": 233.0, "text": " Now, drawbacks. Well, not really a drawback, but there are cases where the sub millimeter level simulation results materialize in a bit more crisp creases, but not much more."}, {"start": 233.0, "end": 242.0, "text": " I really had to go to hunt for differences in this one. If you have spotted stark differences between the two here, please let me know in the comments."}, {"start": 242.0, "end": 260.0, "text": " And here comes one more amazing thing. This paper was written by Huam in Wang and that's it. A single author paper that has been accepted to the SIGRAF conference, which is perhaps the most prestigious conference in computer graphics."}, {"start": 260.0, "end": 270.0, "text": " That is a rare sight indeed. You see, sometimes even a single researcher can make all the difference. Huge congratulations."}, {"start": 270.0, "end": 280.0, "text": " This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 280.0, "end": 294.0, "text": " They've recently launched Quadro RTX 6000 RTX 8000 and V100 instances and hold on to your papers because Lambda GPU Cloud can cost less than half of AWS and Asia."}, {"start": 294.0, "end": 308.0, "text": " Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers."}, {"start": 308.0, "end": 315.0, "text": " Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today."}, {"start": 315.0, "end": 324.0, "text": " Thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time."}]
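The cloth-wrinkle entry above judges methods "in terms of mathematics" by their relative error against a reference simulation. A common way to express this is a relative L2 norm over the vertex positions; the sketch below assumes that metric and uses random placeholder data, since the paper's exact error measure and meshes are not reproduced here.

```python
import numpy as np

def relative_error(sim_positions, ref_positions):
    """Relative L2 error of simulated vertex positions against a reference run."""
    return np.linalg.norm(sim_positions - ref_positions) / np.linalg.norm(ref_positions)

# Hypothetical per-frame cloth states: N vertices with x, y, z coordinates.
rng = np.random.default_rng(0)
ref = rng.normal(size=(10_000, 3))                        # reference simulation
previous = ref + rng.normal(scale=5e-3, size=ref.shape)   # stand-in for a prior method
new = ref + rng.normal(scale=1e-3, size=ref.shape)        # stand-in for the new method

print(f"previous method: {relative_error(previous, ref):.4%}")
print(f"new method:      {relative_error(new, ref):.4%}")
```

The lower the number, the closer a method sits to the reference; comparing such per-frame errors is how "more accurate" claims like the one above are usually backed up.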
Two Minute Papers
https://www.youtube.com/watch?v=M2QJ9iyGQ48
Watch This Virtual Dinosaur Fall Into A Cactus! 🦖🌵
❤️ Check out the Gradient Dissent podcast by Weights & Biases: http://wandb.me/gd  📝 The paper "Medial IPC: accelerated incremental potential contact with medial elastics" is available here: https://yangzzzy.github.io/PDF/medial_IPC_SIG21.pdf https://dl.acm.org/doi/10.1145/3450626.3459753 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to do this... and this... and this on a budget. Today, through the power of computer graphics research, we can simulate all these amazing elastic interactions. If we are very patient, that is, because they take forever to compute. But if we wish to run these simulations quicker, what we can do is increase something that we call the time step size. Usually, this means that the simulation takes less time, but is also less accurate. Let's see this phenomenon through a previous method from just a year ago. Here, we set the time step size relatively small and drop an elastic barbarian ship onto these rods. This is a challenging scene because the ship is made out of half a million tiny elements and we have to simulate their interactions with the scene. How does this perform? Uh-oh. This isn't good. Did you see the issues? Issue number one is that the simulation is unstable. Look, things remain in motion when they shouldn't. And two, this is also troubling. Penetrations. Now, let's increase the time step size. What do we expect to happen? Well, now we advance the simulation in bigger chunks. So, we should expect to miss even more interactions in between these bigger steps. And, whoa! Sure enough, even more instability, even more penetration. So, what is the solution? Well, let's have a look at this new method and see if it can deal with this difficult scene. Now, hold on to your papers and... Wow! I am loving this. Issue number one, things coming to rest, is solved. And issue number two, no penetrations. That is amazing. Now, what is so interesting here? Well, what you see here should not be possible at all because this new technique computes a reduced simulation instead. This is a simulation on a budget. And not only that, but let's increase the time step size a little. This means that we can advance the time in bigger chunks when computing the simulation, at the cost of potentially missing important interactions between these steps. In short, expect a bad simulation now, like with the previous one. And... Wow! This is amazing. It still looks fine. But we don't know that for sure, because we haven't seen the reference simulation yet. So, you know what's coming? Oh yes! Let's compare it to the reference simulation that takes forever to compute. This looks great. And... Let's see... This looks great too. They don't look the same, but if I were asked which one is the reference and which is the cheaper reduced simulation, I am not sure if I would be able to tell. Are you able to tell? Well, be careful with your answer because I have swapped the two. In reality, this is the reference. And this is the reduced simulation. Were you able to tell? Let me know in the comments below. And that is exactly the point. All this means that we got away with only computing the simulation in bigger steps. So, why is that good? Well, of course, because we got through it quicker. OK, but how much quicker? 110 times quicker. What? The two are close to equivalent, but this is more than 100 times quicker. Sign me up right away. Note that this is still not real time, but we are firmly in the seconds-per-frame domain. So, we don't need an all-nighter for such a simulation. Just a coffee break. Now, note that this particular scene is really well suited for the new technique.
Other scenes aren't typically 100 times faster; the worst-case scenario is when we throw around a bunch of fur balls, but even that is at least 10 to 15 times faster. What does that mean? Well, an all-nighter simulation can be done, maybe not during a coffee break, but during a quick little nap. Yes, we can rest like this tiny dinosaur for a while, and by the time we wake up, the simulation is done, and we can count on it being close to the real deal. So good. Just make sure to keep the friction high while resting here, or otherwise, this happens. So, from now on, we get better simulations up to 100 times faster. What a time to be alive! This video has been supported by Weights & Biases. They have an amazing podcast by the name Gradient Dissent, where they interview machine learning experts who discuss how they use learning-based algorithms to solve real-world problems. They've discussed biology, teaching robots, machine learning in outer space, and a whole lot more. Perfect for a fellow scholar with an open mind. Make sure to visit them through wandb.me/gd or just click the link in the video description. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.76, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zona Ifehir."}, {"start": 4.76, "end": 7.4, "text": " Today we are going to do this..."}, {"start": 9.4, "end": 10.8, "text": " and this..."}, {"start": 12.48, "end": 15.4, "text": " and this on a budget."}, {"start": 15.76, "end": 19.400000000000002, "text": " Today, through the power of computer graphics research works,"}, {"start": 19.400000000000002, "end": 23.6, "text": " we can simulate all these amazing elastic interactions."}, {"start": 24.0, "end": 29.6, "text": " If we are very patient, that is, because they take forever to compute."}, {"start": 29.6, "end": 33.6, "text": " But, if we wish to run these simulations quicker,"}, {"start": 33.6, "end": 39.0, "text": " what we can do is increase something that we call the time step size."}, {"start": 39.0, "end": 43.0, "text": " Usually, this means that the simulation takes less time,"}, {"start": 43.0, "end": 45.8, "text": " but is also less accurate."}, {"start": 45.8, "end": 50.6, "text": " Let's see this phenomenon through a previous method from just a year ago."}, {"start": 50.6, "end": 54.400000000000006, "text": " Here, we set the time step size relatively small"}, {"start": 54.400000000000006, "end": 58.8, "text": " and drop an elastic barbarian ship onto these rods."}, {"start": 58.8, "end": 65.8, "text": " This is a challenging scene because the ship is made out of half a million tiny elements"}, {"start": 65.8, "end": 68.8, "text": " and we have to simulate their interactions with the scene."}, {"start": 68.8, "end": 70.8, "text": " How does this perform?"}, {"start": 70.8, "end": 71.8, "text": " Uh-oh."}, {"start": 71.8, "end": 73.8, "text": " This isn't good."}, {"start": 73.8, "end": 75.8, "text": " Did you see the issues?"}, {"start": 75.8, "end": 79.8, "text": " Issue number one is that the simulation is unstable."}, {"start": 79.8, "end": 83.8, "text": " Look, things remain in motion when they shouldn't."}, {"start": 83.8, "end": 89.8, "text": " And two, this is also troubling."}, {"start": 89.8, "end": 91.8, "text": " Penetrations."}, {"start": 92.8, "end": 95.8, "text": " Now, let's increase the time step size."}, {"start": 95.8, "end": 97.8, "text": " What do we expect to happen?"}, {"start": 97.8, "end": 101.8, "text": " Well, now we advance the simulation in bigger chunks."}, {"start": 101.8, "end": 105.8, "text": " So, we should expect to miss even more interactions"}, {"start": 105.8, "end": 108.8, "text": " in between these bigger steps."}, {"start": 108.8, "end": 110.8, "text": " And, whoa!"}, {"start": 110.8, "end": 115.8, "text": " Sure enough, even more instability, even more penetration."}, {"start": 115.8, "end": 117.8, "text": " So, what is the solution?"}, {"start": 117.8, "end": 123.8, "text": " Well, let's have a look at this new method and see if it can deal with this difficult scene."}, {"start": 123.8, "end": 126.8, "text": " Now, hold on to your papers and..."}, {"start": 126.8, "end": 128.8, "text": " Wow!"}, {"start": 128.8, "end": 130.8, "text": " I am loving this."}, {"start": 130.8, "end": 134.8, "text": " Issue number one, things coming to rest is solved."}, {"start": 134.8, "end": 138.8, "text": " And issue number two, no penetrations."}, {"start": 138.8, "end": 140.8, "text": " That is amazing."}, {"start": 140.8, "end": 142.8, "text": " Now, what is so interesting here?"}, {"start": 142.8, "end": 146.8, "text": " Well, what you see here should not be possible at all"}, {"start": 146.8, 
"end": 151.8, "text": " because this new technique computes a reduced simulation instead."}, {"start": 151.8, "end": 153.8, "text": " This is a simulation on a budget."}, {"start": 153.8, "end": 159.8, "text": " And not only that, but let's increase the time step size a little."}, {"start": 159.8, "end": 162.8, "text": " This means that we can advance the time in bigger chunks"}, {"start": 162.8, "end": 166.8, "text": " when computing the simulation at the cost of potentially"}, {"start": 166.8, "end": 170.8, "text": " missing important interactions between these steps."}, {"start": 170.8, "end": 173.8, "text": " In short, expect a bad simulation now,"}, {"start": 173.8, "end": 175.8, "text": " like with the previous one."}, {"start": 175.8, "end": 176.8, "text": " And..."}, {"start": 176.8, "end": 177.8, "text": " Wow!"}, {"start": 177.8, "end": 179.8, "text": " This is amazing."}, {"start": 179.8, "end": 181.8, "text": " It still looks fine."}, {"start": 181.8, "end": 183.8, "text": " But, we don't know that for sure,"}, {"start": 183.8, "end": 187.8, "text": " because we haven't seen the reference simulation yet."}, {"start": 187.8, "end": 189.8, "text": " So, you know what's coming?"}, {"start": 189.8, "end": 190.8, "text": " Oh yes!"}, {"start": 190.8, "end": 192.8, "text": " Let's compare it to the reference simulation"}, {"start": 192.8, "end": 195.8, "text": " that takes forever to compute."}, {"start": 195.8, "end": 197.8, "text": " This looks great."}, {"start": 197.8, "end": 198.8, "text": " And..."}, {"start": 198.8, "end": 200.8, "text": " Let's see..."}, {"start": 200.8, "end": 202.8, "text": " This looks great too."}, {"start": 202.8, "end": 204.8, "text": " They don't look the same,"}, {"start": 204.8, "end": 207.8, "text": " but if I were asked which one the reference is"}, {"start": 207.8, "end": 210.8, "text": " and which is the cheaper reduced simulation,"}, {"start": 210.8, "end": 213.8, "text": " I am not sure if I would be able to tell."}, {"start": 213.8, "end": 215.8, "text": " Are you able to tell?"}, {"start": 215.8, "end": 218.8, "text": " Well, be careful with your answer"}, {"start": 218.8, "end": 220.8, "text": " because I have swapped the two."}, {"start": 220.8, "end": 224.8, "text": " In reality, this is the reference."}, {"start": 224.8, "end": 227.8, "text": " And this is the reduced simulation."}, {"start": 227.8, "end": 229.8, "text": " Were you able to tell?"}, {"start": 229.8, "end": 231.8, "text": " Let me know in the comments below."}, {"start": 231.8, "end": 233.8, "text": " And that is exactly the point."}, {"start": 233.8, "end": 235.8, "text": " All this means that we got away"}, {"start": 235.8, "end": 239.8, "text": " with only computing the simulation in bigger steps."}, {"start": 239.8, "end": 241.8, "text": " So, why is that good?"}, {"start": 241.8, "end": 245.8, "text": " Well, of course, because we got through it quicker."}, {"start": 245.8, "end": 248.8, "text": " OK, but how much quicker?"}, {"start": 248.8, "end": 251.8, "text": " 110 times quicker."}, {"start": 251.8, "end": 253.8, "text": " What?"}, {"start": 253.8, "end": 256.8, "text": " The two are close to equivalent,"}, {"start": 256.8, "end": 259.8, "text": " but this is more than 100 times quicker."}, {"start": 259.8, "end": 261.8, "text": " Sign me up right away."}, {"start": 261.8, "end": 264.8, "text": " Note that this is still not real time,"}, {"start": 264.8, "end": 267.8, "text": " but we are firmly in the second's per frame domain."}, {"start": 267.8, "end": 271.8, 
"text": " So, we don't need an all-nighter for such a simulation."}, {"start": 271.8, "end": 273.8, "text": " Just a coffee break."}, {"start": 273.8, "end": 276.8, "text": " Now, note that this particular scene is really suited"}, {"start": 276.8, "end": 277.8, "text": " for the new technique."}, {"start": 277.8, "end": 280.8, "text": " Other scenes aren't typically 100 times faster,"}, {"start": 280.8, "end": 285.8, "text": " but worst-case scenario is when we throw around a bunch of fur balls,"}, {"start": 285.8, "end": 291.8, "text": " but even that is at least 10 to 15 times faster."}, {"start": 291.8, "end": 293.8, "text": " What does that mean?"}, {"start": 293.8, "end": 295.8, "text": " Well, an all-nighter simulation can be done,"}, {"start": 295.8, "end": 298.8, "text": " maybe not during a coffee break,"}, {"start": 298.8, "end": 301.8, "text": " but during a quick little nap."}, {"start": 301.8, "end": 305.8, "text": " Yes, we can rest like this tiny dinosaur for a while,"}, {"start": 305.8, "end": 309.8, "text": " and by the time we wake up, the simulation is done,"}, {"start": 309.8, "end": 313.8, "text": " and we can count on it being close to the real deal."}, {"start": 313.8, "end": 314.8, "text": " So good."}, {"start": 314.8, "end": 318.8, "text": " Just make sure to keep the friction high while resting here,"}, {"start": 318.8, "end": 321.8, "text": " or otherwise, this happens."}, {"start": 321.8, "end": 327.8, "text": " So, from now on, we get better simulations up to 100 times faster."}, {"start": 327.8, "end": 329.8, "text": " What a time to be alive!"}, {"start": 329.8, "end": 332.8, "text": " This video has been supported by weights and biases."}, {"start": 332.8, "end": 336.8, "text": " They have an amazing podcast by the name Gradient Descent"}, {"start": 336.8, "end": 339.8, "text": " where they interview machine learning experts"}, {"start": 339.8, "end": 342.8, "text": " who discuss how they use learning-based algorithms"}, {"start": 342.8, "end": 344.8, "text": " to solve real-world problems."}, {"start": 344.8, "end": 347.8, "text": " They've discussed biology, teaching robots,"}, {"start": 347.8, "end": 351.8, "text": " machine learning in outer space, and a whole lot more."}, {"start": 351.8, "end": 354.8, "text": " Perfect for a fellow scholar with an open mind."}, {"start": 354.8, "end": 359.8, "text": " Make sure to visit them through wnb.me-gd"}, {"start": 359.8, "end": 362.8, "text": " or just click the link in the video description."}, {"start": 362.8, "end": 365.8, "text": " Our thanks to weights and biases for their longstanding support"}, {"start": 365.8, "end": 368.8, "text": " and for helping us make better videos for you."}, {"start": 368.8, "end": 370.8, "text": " Thanks for watching and for your generous support,"}, {"start": 370.8, "end": 396.8, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=VMCYRCCqR5Q
This AI Learned Physics...But How Good Is It? ⚛
❤️ Check out Perceptilabs and sign up for a free demo here: https://www.perceptilabs.com/papers 📝 The paper "High-order Differentiable Autoencoder for Nonlinear Model Reduction" is available here: https://arxiv.org/abs/2102.11026 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to engage in the favorite pastime of the computer graphics researcher, which is, well, this. And this. And believe it or not, all of this is simulated through a learning-based technique. Earlier, we marveled at a paper that showed that an AI can indeed learn to perform a fluid simulation. And just one more paper down the line, the simulations it was able to perform extended to other fields like structural mechanics, incompressible fluid dynamics, and more. And even better, it could even simulate shapes and geometries that it had never seen before. So today, the question is not whether an AI can learn physics; the question is, how well can an AI learn physics? Let's try to answer that question by having a look at our first experiment. Here is a traditional handcrafted technique and a new neural network-based physics simulator. Both are doing fine, so nothing to see here. Whoa! What happened? Well, dear Fellow Scholars, this is when a simulation blows up. But the new one is still running, even when some traditional simulators blow up. That is excellent. But we don't have to bend over backwards to find other situations where the new technique is better than the previous ones. You see the reference simulation here, and it is all well and good that the new method does not blow up. But how accurate is it on this challenging scene? Let's have a look. The reference shows a large amount of bending where the head is roughly in line with the knees. Let's memorize that. Head in line with the knees. Got it. Now, let's see how the previous methods were able to deal with this challenging simulation. When simulating a system of a smaller size, well, none of these are too promising. When we crank up the simulation domain size, the physical model derivative, PMD in short, does pretty well. So, what about the new method? Both bend quite well. Not quite perfect. Remember, the head would have to go down to be almost in line with the knees. But, amazing progress nonetheless. This was a really challenging scene, and in other cases, the new method is able to match the reference simulator perfectly. So far, this sounds pretty good, but PMD seems to be a contender, and that, dear Fellow Scholars, is a paper from 2005. From 16 years ago. So, why showcase this new work? Well, we have forgotten about one important thing. And here comes the key. The new simulation technique runs from 30 to almost 60 times faster than previous methods. How is that even possible? Well, this is a neural network-based technique. And training a neural network typically takes a long time, but we only need to do this once, and when we are done, querying the neural network can typically be done very quickly. Does this mean...? Yes, yes it does. All this runs in real time for the dinosaur, bunny, and armadillo scenes, all of which are built from about 10,000 triangles. And we can play with them by using our mouse on our home computer. The cactus and herbal scenes require simulating not tens, but hundreds of thousands of triangles. So, this took a bit longer, as they are running between 1 and a half and 2 and a half frames per second. So, this is not only more accurate than previous techniques, not only more resilient than the previous techniques, but is also 30 to 60 times faster at the same time. Wow!
And just think about the fact that just a year ago, an AI could only perform low-resolution fluid simulations, then a few months ago, more kinds of simulations, and then today, just one more paper down the line, simulations of this complexity. Just imagine what we will be able to do just two more papers down the line. What a time to be alive! Perceptilabs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables, and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers to easily install the free local version of their system today. Our thanks to Perceptilabs for their support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is Dominic Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.8, "end": 10.8, "text": " Today we are going to engage in the favorite pastimes of the computer graphics researcher,"}, {"start": 10.8, "end": 13.8, "text": " which is, well, this."}, {"start": 13.8, "end": 15.8, "text": " And this."}, {"start": 15.8, "end": 22.0, "text": " And believe it or not, all of this is simulated through a learning-based technique."}, {"start": 22.0, "end": 30.6, "text": " Earlier, we marveled at a paper that showed that an AI can indeed learn to perform a fluid simulation."}, {"start": 30.6, "end": 36.0, "text": " And just one more paper down the line, the simulations it was able to perform,"}, {"start": 36.0, "end": 43.6, "text": " extended to other fields like structural mechanics, incompressible fluid dynamics, and more."}, {"start": 43.6, "end": 51.400000000000006, "text": " And even better, it could even simulate shapes and geometries that it had never seen before."}, {"start": 51.4, "end": 60.8, "text": " So today, the question is not whether an AI can learn physics, the question is, how well can an AI learn physics?"}, {"start": 60.8, "end": 65.6, "text": " Let's try to answer that question by having a look at our first experiment."}, {"start": 65.6, "end": 72.6, "text": " Here is a traditional handcrafted technique and a new neural network-based physics simulator."}, {"start": 72.6, "end": 76.6, "text": " Both are doing fine, so nothing to see here."}, {"start": 76.6, "end": 78.2, "text": " Whoa!"}, {"start": 78.2, "end": 79.8, "text": " What happened?"}, {"start": 79.8, "end": 85.2, "text": " Well, dear fellow scholars, this is when a simulation blows up."}, {"start": 85.2, "end": 91.6, "text": " But the new one is still running, even when some traditional simulators blow up."}, {"start": 91.6, "end": 93.6, "text": " That is excellent."}, {"start": 93.6, "end": 101.4, "text": " But we don't have to bend over backwards to find other situations where the new technique is better than the previous ones."}, {"start": 101.4, "end": 108.2, "text": " You see the reference simulation here, and it is all well and good that the new method does not blow up."}, {"start": 108.2, "end": 112.60000000000001, "text": " But how accurate is it on this challenging scene?"}, {"start": 112.60000000000001, "end": 114.0, "text": " Let's have a look."}, {"start": 114.0, "end": 120.60000000000001, "text": " The reference shows a large amount of bending where the head is roughly in line with the knees."}, {"start": 120.60000000000001, "end": 122.2, "text": " Let's memorize that."}, {"start": 122.2, "end": 124.4, "text": " Head in line with the knees."}, {"start": 124.4, "end": 125.4, "text": " Got it."}, {"start": 125.4, "end": 131.2, "text": " Now, let's see how the previous methods were able to deal with this challenging simulation."}, {"start": 131.2, "end": 137.8, "text": " When simulating a system of a smaller size, well, none of these are too promising."}, {"start": 137.8, "end": 146.0, "text": " When we crank up the simulation domain size, the physical model derivative, PMD in short, does pretty well."}, {"start": 146.0, "end": 148.8, "text": " So, what about the new method?"}, {"start": 148.8, "end": 151.4, "text": " Both bend quite well."}, {"start": 151.4, "end": 152.8, "text": " Not quite perfect."}, {"start": 152.8, "end": 158.20000000000002, "text": " Remember, the head would have to go down to be almost in line with the knees."}, {"start": 
158.20000000000002, "end": 161.20000000000002, "text": " But, amazing progress nonetheless."}, {"start": 161.2, "end": 170.39999999999998, "text": " This was a really challenging scene, and in other cases, the new method is able to match the reference simulator perfectly."}, {"start": 170.39999999999998, "end": 181.2, "text": " So far, this sounds pretty good, but PMD seems to be a contender, and that Dear Fellow Scholars is a paper from 2005."}, {"start": 181.2, "end": 183.6, "text": " From 16 years ago."}, {"start": 183.6, "end": 186.79999999999998, "text": " So, why showcase this new work?"}, {"start": 186.79999999999998, "end": 190.39999999999998, "text": " Well, we have forgotten about one important thing."}, {"start": 190.4, "end": 192.6, "text": " And here comes the key."}, {"start": 192.6, "end": 200.4, "text": " The new simulation technique runs from 30 to almost 60 times faster than previous methods."}, {"start": 200.4, "end": 202.8, "text": " How is that even possible?"}, {"start": 202.8, "end": 206.0, "text": " Well, this is a neural network-based technique."}, {"start": 206.0, "end": 213.0, "text": " And training a neural network typically takes a long time, but we only need to do this once,"}, {"start": 213.0, "end": 218.6, "text": " and when we are done, querying the neural network typically can be done very quickly."}, {"start": 218.6, "end": 220.2, "text": " Does this mean?"}, {"start": 220.2, "end": 222.4, "text": " Yes, yes it does."}, {"start": 222.4, "end": 228.79999999999998, "text": " All this runs in real time for this dinosaur, bunny, and armadillo scenes,"}, {"start": 228.79999999999998, "end": 232.6, "text": " all of which are built from about 10,000 triangles."}, {"start": 232.6, "end": 237.2, "text": " And we can play with them by using our mouse on our home computer."}, {"start": 237.2, "end": 245.0, "text": " The cactus and herbal scenes require simulating, not tens, but hundreds of thousands of triangles."}, {"start": 245.0, "end": 252.4, "text": " So, this took a bit longer as they are running between 1 and a half and 2 and a half frames per second."}, {"start": 252.4, "end": 260.0, "text": " So, this is not only more accurate than previous techniques, not only more resilient than the previous techniques,"}, {"start": 260.0, "end": 265.4, "text": " but is also 30 to 60 times faster at the same time."}, {"start": 265.4, "end": 266.6, "text": " Wow!"}, {"start": 266.6, "end": 275.0, "text": " And just think about the fact that just a year ago, an AI could only perform low-resolution fluid simulations,"}, {"start": 275.0, "end": 279.0, "text": " then a few months ago, more kinds of simulations,"}, {"start": 279.0, "end": 285.40000000000003, "text": " and then today, just one more paper down the line, simulations of this complexity."}, {"start": 285.40000000000003, "end": 290.8, "text": " Just imagine what we will be able to do just two more papers down the line."}, {"start": 290.8, "end": 300.0, "text": " What a time to be alive! 
Perceptilebs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible."}, {"start": 300.0, "end": 306.40000000000003, "text": " This gives you a faster way to build out models with more transparency into how your model is architected,"}, {"start": 306.40000000000003, "end": 309.40000000000003, "text": " how it performs, and how to debug it."}, {"start": 309.40000000000003, "end": 314.0, "text": " Look, it lets you toggle between the visual modeler and the code editor."}, {"start": 314.0, "end": 317.6, "text": " It even generates visualizations for all the model variables,"}, {"start": 317.6, "end": 323.8, "text": " and gives you recommendations both during modeling and training, and does all this automatically."}, {"start": 323.8, "end": 329.6, "text": " I only wish I had a tool like this when I was working on my neural networks during my PhD years."}, {"start": 329.6, "end": 336.40000000000003, "text": " Visit perceptilebs.com slash papers to easily install the free local version of their system today."}, {"start": 336.40000000000003, "end": 341.6, "text": " Our thanks to perceptilebs for their support, and for helping us make better videos for you."}, {"start": 341.6, "end": 347.6, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
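The transcript above leans on the point that training a neural network is slow but querying it is fast, which is what makes the learned reduced simulation run in real time. The sketch below times a single forward pass through a tiny random two-layer "decoder" that maps latent coordinates to vertex displacements; the layer sizes and weights are invented for illustration and are not the paper's trained model.

```python
import time
import numpy as np

rng = np.random.default_rng(0)

# Tiny stand-in "decoder": latent coordinates -> full-space vertex displacements.
# The real method learns such a mapping; here the weights are random, purely to
# show that one query is just a couple of matrix multiplies and takes milliseconds.
n_latent, n_hidden, n_vertices = 64, 256, 10_000
W1 = rng.normal(size=(n_latent, n_hidden)) * 0.1
W2 = rng.normal(size=(n_hidden, 3 * n_vertices)) * 0.1

def decode(z):
    h = np.tanh(z @ W1)                      # one hidden layer with a nonlinearity
    return (h @ W2).reshape(n_vertices, 3)   # displacement per vertex

z = rng.normal(size=n_latent)
t0 = time.perf_counter()
for _ in range(100):
    decode(z)
ms_per_query = (time.perf_counter() - t0) / 100 * 1e3
print(f"one decoder query: ~{ms_per_query:.2f} ms for {n_vertices} vertices")
```

The expensive part, fitting the weights, happens once and offline; afterwards each simulation step only needs cheap queries like this, which is where the 30 to 60 times speedup reported above comes from.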
Two Minute Papers
https://www.youtube.com/watch?v=U_VsRE0-SQE
Simulating 800,000 Metric Tons of Ice! 🤯
❤️ Check out Perceptilabs and sign up for a free demo here: https://www.perceptilabs.com/papers 📝 The paper "A glacier–ocean interaction model for tsunami genesis due to iceberg calving" is available here: https://www.nature.com/articles/s43247-021-00179-7 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-566722/ Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to see that some computer graphics simulation techniques are so accurate they can come out to the real world and even teach us something new. And it can do this too. And this too. Today, computer graphics research techniques are capable of conjuring up these beautiful virtual worlds where we can engage in the favorite pastime of the computer graphics researcher, which is destroying them in a spectacular manner. Here you see Joshua Wolper's paper; he is a returning guest in this series. He was a PhD student at the time, and his first work we showcased was about breaking bread and visualizing all the damage that takes place during this process. His later work was about enriching our simulations with anisotropic damage and elasticity. So what does that mean exactly? This means that it supports more extreme topological changes in these virtual objects. And in the meantime he has graduated. So congratulations on all the amazing works, Dr. Joshua Wolper. And let's see what he has been up to since he leveled up. Note that all of these previous works are about simulating fracturing and damage. I wonder what else all this knowledge could be applied to? Hmm, how about simulating glacier fracture? Yes, really. But before we start, why would we do that? Because a technique like this could help assess and identify potential hazards ahead of time and, get this, maybe even mitigate them. Who knows, maybe we could even go full Hari Seldon and predict potential hazards before they happen? Let's see how. To start out we need three things. First, we need to simulate ice fracturing. Here is a related earlier work. However, this is on snow. Ice is different. That is going to be a challenge. Two, we need to simulate the ocean. And three, simulate how the two react to each other. Wow, that is going to be quite a challenge because capturing all of these really accurately requires multiple different algorithms. You may remember from this previous work how difficult it is to marry two simulation algorithms. Believe it or not, this is not one but two simulations, one inside the box and one outside. To make all this happen, plenty of work had to be done in the transition zones. So this one for ice fractures might be even more challenging. And you may rest assured that we will not let this paper go until we see my favorite thing in all simulation research, which is of course comparing the simulation results to real-world footage. For instance, the results would have to agree with this earlier lab experiment by Heller and colleagues that measures how the ocean reacts to a huge block of ice falling into it. Now, hold on for a second. We can't just say that it falls into the ocean. There are multiple kinds of falling into the ocean. For instance, it can happen due to gravity, or to buoyancy, or capsizing. So we have two questions. Question number one, does this matter? Well, let's have a look. Oh yes, it does matter a great deal. The generated waves look quite different. Now, here comes the most exciting part. Question number two, do Dr. Wolper's simulations agree with this real lab experiment? To start out, we wish to see three experiments. One for gravity, where the color coding goes from colder to warmer colors as the velocity of the waves increases. We also have one simulation for buoyancy, and one for capsizing.
We could say they look excellent, but we can't say that because we don't yet know how this experiment relates to the lab experiment. Before we compare the two, let's also add one more variable: theory. We expect the simulations to match the theory nearly perfectly and to more or less match the lab experiment. Why only more or less, why not perfectly? Because it is hard to reproduce the exact forces, geometries and materials that were used in the experiment. Now, let's see: the solid lines follow the dashed lines very well. This means that the simulation follows the theory nearly perfectly. For the simulation, this plot is a little easier to read and shows that the lab experiment is within the error limits of the simulation. Now, at this point, yes, it is justified to say this is an excellent work. Now, let's ramp up the complexity of these simulations and hopefully give it a hard time. Look, now we're talking. Real icebergs, real calving. The paper also shows plots that compare this experiment to the theoretical results and found good agreement there too. Very good. Now, if it can deal with this, hold on to your papers and let's bring forth the final boss: Eqip Sermia. Well, what is that? This was a real glacier fracturing event in Greenland that involved 800,000 metric tons of ice. And at this point, I said, I am out. There are just too many variables, too many unknowns, too complex a situation to get meaningful results. 800,000 metric tons, you can't possibly reproduce this with a simulation. Well, if you have been holding on to your paper so far, now squeeze that paper and watch this. This is the reproduction, and even better, we have measured data about wave amplitudes, average wave speed and iceberg sizes involved in this event, and get this: this simulation is able to reproduce all of these accurately. Wow! And we are still not done yet. It can also produce full 3D simulations, which require the interplay of tens of millions of particles and can create beautiful footage like this. This not only looks beautiful, but it is useful, too. Look, we can even assemble a scene that reenacts what would happen if we were sitting in a boat nearby. Spoiler alert: it's not fun. So, there we go. Some of these computer graphics simulations are so accurate they can come out to the real world and even teach us new things. Reading papers makes me very, very happy, and this was no exception. I had a fantastic time reading this paper. If you wish to have a great time, too, make sure to check it out in the video description. Perceptilabs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables and gives you recommendations both during modeling and training and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers to easily install the free local version of their system today. Our thanks to Perceptilabs for their support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
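The validation logic in the transcript above, where the simulation should match theory nearly perfectly and the lab experiment only more or less, boils down to comparing curves. Here is a small, hypothetical Python sketch of that kind of comparison; the wave-amplitude curves and noise levels are placeholders, not data from Wolper and colleagues or from the Heller lab experiment.

```python
import numpy as np

def max_relative_deviation(curve, theory):
    """Worst-case deviation of a curve from theory, relative to the theory's peak."""
    return np.max(np.abs(curve - theory)) / np.max(np.abs(theory))

# Placeholder wave-amplitude time series (metres) for one calving style.
t = np.linspace(0.0, 10.0, 200)
theory = 0.5 * np.exp(-0.2 * t) * np.sin(2.0 * t)        # invented analytic curve
rng = np.random.default_rng(1)
sim = theory + rng.normal(scale=0.005, size=t.shape)     # invented simulated curve
lab = theory + rng.normal(scale=0.03, size=t.shape)      # invented lab measurement

print(f"simulation vs theory: {max_relative_deviation(sim, theory):.1%}")
print(f"lab data   vs theory: {max_relative_deviation(lab, theory):.1%}")
```

A small deviation for the simulation and a somewhat larger one for the lab data is exactly the pattern described above: the solver reproduces the theory closely, while the physical experiment scatters around it within the stated error limits.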
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.8, "end": 11.040000000000001, "text": " Today we are going to see that some computer graphics simulation techniques are so accurate"}, {"start": 11.040000000000001, "end": 16.0, "text": " they can come out to the real world and even teach us something new."}, {"start": 16.0, "end": 19.0, "text": " And it can do this too."}, {"start": 19.0, "end": 21.0, "text": " And this too."}, {"start": 21.0, "end": 28.0, "text": " Today, computer graphics research techniques are capable of conjuring up these beautiful virtual worlds"}, {"start": 28.0, "end": 33.0, "text": " where we can engage in the favorite pastime of the computer graphics researcher"}, {"start": 33.0, "end": 37.0, "text": " which is destroying them in a spectacular manner."}, {"start": 37.0, "end": 43.0, "text": " Here you see Joshua Wolper's paper who is a returning guest in this series."}, {"start": 43.0, "end": 49.0, "text": " He was a PhD student at the time and his first work we showcase was about breaking bread"}, {"start": 49.0, "end": 54.0, "text": " and visualizing all the damage that takes place during this process."}, {"start": 54.0, "end": 62.0, "text": " His later work was about enriching our simulations with anisotropic damage and elasticity."}, {"start": 62.0, "end": 64.0, "text": " So what does that mean exactly?"}, {"start": 64.0, "end": 70.0, "text": " This means that it supports more extreme topological changes in these virtual objects."}, {"start": 70.0, "end": 73.0, "text": " And in the meantime he has graduated."}, {"start": 73.0, "end": 78.0, "text": " So congratulations on all the amazing works Dr. Joshua Wolper."}, {"start": 78.0, "end": 83.0, "text": " And let's see what he has been up to since he leveled up."}, {"start": 83.0, "end": 90.0, "text": " Note that all of these previous works are about simulating, fracturing and damage."}, {"start": 90.0, "end": 95.0, "text": " I wonder what else could all this knowledge be applied for?"}, {"start": 95.0, "end": 100.0, "text": " Hmm, how about simulating glacier fracture?"}, {"start": 100.0, "end": 101.0, "text": " Yes, really."}, {"start": 101.0, "end": 104.0, "text": " But before we start, why would we do that?"}, {"start": 104.0, "end": 112.0, "text": " Because a technique like this could help assess and identify potential hazards ahead of time"}, {"start": 112.0, "end": 116.0, "text": " and get this maybe even mitigate them."}, {"start": 116.0, "end": 124.0, "text": " Who knows maybe we could even go full-hurry seldom and predict potential hazards before they happen?"}, {"start": 124.0, "end": 125.0, "text": " Let's see how."}, {"start": 125.0, "end": 128.0, "text": " To start out we need three things."}, {"start": 128.0, "end": 131.0, "text": " First we need to simulate ice fracturing."}, {"start": 131.0, "end": 134.0, "text": " Here is a related earlier work."}, {"start": 134.0, "end": 136.0, "text": " However, this is on snow."}, {"start": 136.0, "end": 138.0, "text": " Ice is different."}, {"start": 138.0, "end": 140.0, "text": " That is going to be a challenge."}, {"start": 140.0, "end": 143.0, "text": " Two, we need to simulate the ocean."}, {"start": 143.0, "end": 148.0, "text": " And three, simulate how the two react to each other."}, {"start": 148.0, "end": 158.0, "text": " Wow, that is going to be quite a challenge because capturing all of these really accurately requires multiple different algorithms."}, {"start": 158.0, 
"end": 165.0, "text": " You may remember from this previous work how difficult it is to marry two simulation algorithms."}, {"start": 165.0, "end": 174.0, "text": " Believe it or not, this is not one but two simulations, one inside the box and one outside."}, {"start": 174.0, "end": 179.0, "text": " To make all this happen plenty of work had to be done in the transition zones."}, {"start": 179.0, "end": 184.0, "text": " So this one for ice fractures might be even more challenging."}, {"start": 184.0, "end": 192.0, "text": " And you may rest assured that we will not let this paper go until we see my favorite thing in all simulation research,"}, {"start": 192.0, "end": 197.0, "text": " which is of course comparing the simulation results to real world footage."}, {"start": 197.0, "end": 210.0, "text": " For instance, the results would have to agree with this earlier lab experiment by Heller and colleagues that measures how the ocean reacts to a huge block of ice falling into it."}, {"start": 210.0, "end": 213.0, "text": " Now, hold on for a second."}, {"start": 213.0, "end": 217.0, "text": " We can't just say that it falls into the ocean."}, {"start": 217.0, "end": 221.0, "text": " There are multiple kinds of falling into the ocean."}, {"start": 221.0, "end": 233.0, "text": " For instance, it can either happen due to gravity or to buoyancy or capsizing."}, {"start": 233.0, "end": 235.0, "text": " So we have two questions."}, {"start": 235.0, "end": 239.0, "text": " Question number one, does this matter?"}, {"start": 239.0, "end": 242.0, "text": " Well, let's have a look."}, {"start": 242.0, "end": 246.0, "text": " Oh yes, it does matter a great deal."}, {"start": 246.0, "end": 249.0, "text": " The generated waves look quite different."}, {"start": 249.0, "end": 252.0, "text": " Now, here comes the most exciting part."}, {"start": 252.0, "end": 259.0, "text": " Question number two, do Dr. 
Wolper simulations agree with this real lab experiment?"}, {"start": 259.0, "end": 262.0, "text": " To start out, we wish to see three experiments."}, {"start": 262.0, "end": 271.0, "text": " One for gravity, the color coding goes from colder to warmer colors as the velocity of the waves increases."}, {"start": 271.0, "end": 277.0, "text": " We also have one simulation for buoyancy,"}, {"start": 277.0, "end": 279.0, "text": " and one for capsizing."}, {"start": 279.0, "end": 288.0, "text": " We could say they look excellent, but we can't say that because we don't yet know how this experiment relates to the lab experiment yet."}, {"start": 288.0, "end": 293.0, "text": " Before we compare the two, let's also add one more variable."}, {"start": 293.0, "end": 303.0, "text": " Theory, we expect that the simulations match the theory nearly perfectly and to more or less match the lab experiment."}, {"start": 303.0, "end": 307.0, "text": " Why only more or less, why not perfectly?"}, {"start": 307.0, "end": 315.0, "text": " Because it is hard to reproduce the exact forces, geometries and materials that were used in the experiment."}, {"start": 315.0, "end": 321.0, "text": " Now, let's see, the solid lines follow the dash line very well."}, {"start": 321.0, "end": 325.0, "text": " This means that the simulation follows the theory nearly perfectly."}, {"start": 325.0, "end": 335.0, "text": " For the simulation, this plot is a little easier to read and shows that the lab experiment is within the error limits of the simulation."}, {"start": 335.0, "end": 341.0, "text": " Now, at this point, yes, it is justified to say this is an excellent work."}, {"start": 341.0, "end": 348.0, "text": " Now, let's ramp up the complexity of these simulations and hopefully give it a hard time."}, {"start": 348.0, "end": 353.0, "text": " Look, now we're talking. Real icebergs, real calving."}, {"start": 353.0, "end": 362.0, "text": " The paper also shows plots that compare this experiment to the theoretical results and found good agreements there too."}, {"start": 362.0, "end": 370.0, "text": " Very good. Now, if it can deal with this, hold on to your papers and let's bring forth the final boss."}, {"start": 370.0, "end": 372.0, "text": " Equip, Sermia."}, {"start": 372.0, "end": 375.0, "text": " Well, what is that?"}, {"start": 375.0, "end": 384.0, "text": " This was a real glacier fracturing event in Greenland that involved 800,000 metric tons of ice."}, {"start": 384.0, "end": 395.0, "text": " And at this point, I said, I am out. There are just too many variables, too many unknowns, too complex a situation to get meaningful results."}, {"start": 395.0, "end": 402.0, "text": " 800,000 metric tons, you can't possibly reproduce this with a simulation."}, {"start": 402.0, "end": 409.0, "text": " Well, if you have been holding on to your paper so far, now squeeze that paper and watch this."}, {"start": 409.0, "end": 422.0, "text": " This is the reproduction and even better, we have measured data about wave amplitudes, average wave speed, iceberg sizes involved in this event, and get this."}, {"start": 422.0, "end": 427.0, "text": " This simulation is able to reproduce all of these accurately."}, {"start": 427.0, "end": 429.0, "text": " Wow!"}, {"start": 429.0, "end": 442.0, "text": " And we are still not done yet. 
It can also produce full 3D simulations, which requires the interplay of tens of millions of particles and can create beautiful footage like this."}, {"start": 442.0, "end": 446.0, "text": " This not only looks beautiful, but it is useful, too."}, {"start": 446.0, "end": 454.0, "text": " Look, we can even assemble a scene that reenacts what would happen if we were sitting in a boat nearby."}, {"start": 454.0, "end": 457.0, "text": " Spoiler alert, it's not fun."}, {"start": 457.0, "end": 467.0, "text": " So, there we go. Some of these computer graphics simulations are so accurate they can come out to the real world and even teach us new things."}, {"start": 467.0, "end": 476.0, "text": " Reading papers makes me very, very happy and this was no exception. I had a fantastic time reading this paper."}, {"start": 476.0, "end": 481.0, "text": " If you wish to have a great time, too, make sure to check it out in the video description."}, {"start": 481.0, "end": 488.0, "text": " The SAP T-Labs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible."}, {"start": 488.0, "end": 498.0, "text": " This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it."}, {"start": 498.0, "end": 502.0, "text": " Look, it lets you toggle between the visual modeler and the code editor."}, {"start": 502.0, "end": 512.0, "text": " It even generates visualizations for all the model variables and gives you recommendations both during modeling and training and does all this automatically."}, {"start": 512.0, "end": 518.0, "text": " I only wish I had a tool like this when I was working on my neural networks during my PhD years."}, {"start": 518.0, "end": 525.0, "text": " Visit perceptilabs.com slash papers to easily install the free local version of their system today."}, {"start": 525.0, "end": 530.0, "text": " Our thanks to perceptilabs for their support and for helping us make better videos for you."}, {"start": 530.0, "end": 534.0, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
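As a rough illustration of the kind of validation described in the segments above, comparing simulated wave amplitudes against theory and against a lab measurement's error band, here is a minimal numpy sketch. Every array name, threshold, and return field is an illustrative placeholder, not data or code from the paper.

import numpy as np

def validation_report(sim, theory, lab, sim_err, rel_tol=0.05):
    # sim, theory, lab: wave amplitude samples over time (illustrative arrays).
    # The simulation should track the theory closely, and the lab measurement
    # should fall inside the simulation's error band.
    rel_dev = np.max(np.abs(sim - theory) / np.maximum(np.abs(theory), 1e-9))
    lab_inside = bool(np.all(np.abs(lab - sim) <= sim_err))
    return {"follows_theory": bool(rel_dev <= rel_tol), "lab_within_error": lab_inside}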
Two Minute Papers
https://www.youtube.com/watch?v=BS2la3C-TYc
This Image Is Fine. Completely Fine. 🤖
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "The Sensory Neuron as a Transformer: Permutation-Invariant Neural Networks for Reinforcement Learning" is available here: https://attentionneuron.github.io/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to find out whether a machine can really think like we humans think. The answer is yes, and no. Let me try to explain. Obviously, there are many differences between how we think, but first, let's try to argue for the similarities. First, this neural network looks at an image and tries to decide whether it depicts a dog or not. What it does is slice up the image into small pieces and keep a score on what it has seen in these snippets. Floppy ears, black snout, fur, okay, we're good. We can conclude that we have a dog over here. We humans would also have a hard time identifying a dog without these landmarks, so that's plus one for thinking the same way. Second, this is DeepMind's deep reinforcement learning algorithm. It looks at the screen much like a human would and tries to learn what the controls do and what the game is about as the game is running. And much like a human, at first it has no idea what is going on and loses all of its lives almost immediately. But, over time, it gets a bit of a feel for the game. Improvements, good. But, if we wait longer, it sharpens its skill set so much that, look, it found out that the best way to beat the game is to dig a tunnel through the blocks and just kick back and enjoy the show. Human-like, excellent. Again, plus one for thinking the same way. Now, let's look at this new permutation-invariant neural network and see what the name means, what it can do, and how it relates to our thinking. Experiment number one, permutations. A permutation means shuffling things around. And we will add shuffling into this cart pole balancing experiment. Here, the learning algorithm does not look at the pixels of the game, but instead takes a look at numbers, for instance, angles, velocity and position. And, as you see, with those, it learned to balance the pole super quickly. The permutation part means that we shuffle this information every now and then. That will surely confuse it, so let's try that. There we go, and... Nice! Didn't even lose it. Shuffle again. It lost it due to a sudden change in the incoming data, but, look, it recovered rapidly. And, can it keep it upright? Yes, it can. So, is this a plus one or a minus one? Is this human thinking or robot thinking? Well, over time, humans can get used to input information switching around too. But, not this quickly. So, this one is debatable. However, I guarantee that the next experiment will not be debatable at all. Now, experiment number two, reshuffling on steroids. We already learned that some amount of reshuffling is okay. So, now, let's have our little AI play Pong, but with a twist, because this time, the reshuffling is getting real. Yes, we now broke up the screen into small blocks, and have reshuffled it to the point that it is impossible to read. But, you know what? Let's make it even worse. Instead of just reshuffling, we will reshuffle the reshuffling. What does that mean? We can rearrange these tiles every few seconds. A true nightmare situation for even an established algorithm, and especially when we are learning the game. Okay, this is nonsense, right? There is no way anyone can meaningfully play the game from this noise, right? And now, hold on to your papers, because the learning algorithm still works fine. Just fine. Not only on Pong, but on a racing game too. Whoa! A big minus one for human thinking. But, if it works fine, you know exactly what needs to be done.
Yes, let's make it even harder. Experiment number three, stolen blocks. Yes, let's keep reshuffling, change the reshuffling over time, and also steal 70% of the data. And... Wow! It is still fine. It only sees 30% of the game all jumbled up, and it still plays just fine. I cannot believe what I am seeing here. Another minus one. This does not seem to think like a human would. So all that is absolutely amazing. But, what is it looking at? Aha! See the white blocks? It is looking at the sides of the road, likely to know what the curvature is, and how to drive it. And, look, only occasionally does it peep at the green patches too. So, does this mean what I think it means? Experiment number four. If you have been holding onto your paper so far, now squeeze that paper, keep the shuffling, and let's shovel in some additional useless complexity, which will take the form of this background. And... My goodness! It still works just fine, and the minus ones just keep on coming. So, this was quite a ride. But, what is the conclusion here? Well, learning algorithms show some ways in which they think like we think, but the answer is no, do not think of a neural network or a reinforcement learner as a digital copy of the brain. Not even close. Now, even better, this is not just a fantastic thought experiment, all this has utility. For instance, in his lecture, one of the authors, David Ha, notes that humans can also get upside-down goggles, or bicycles where the left and right directions are flipped. And, if they do, it takes a great deal of time for the human to adapt. For the neural network, no issues whatsoever. What a time to be alive! This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And, hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
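To make the shuffling experiments above concrete, here is a minimal sketch of how one might wrap an environment so that its observation vector is re-permuted every few steps, the kind of stress test a permutation-invariant policy has to survive. It assumes a classic Gym-style reset()/step() interface returning a 4-tuple and does not show the paper's attention-based policy itself; the class name and reshuffling period are illustrative.

import numpy as np

class ShuffledObservations:
    # Wraps an environment and permutes its observation vector every
    # `reshuffle_every` steps, mimicking the sensory-shuffling stress test.
    def __init__(self, env, reshuffle_every=100, seed=0):
        self.env = env
        self.reshuffle_every = reshuffle_every
        self.rng = np.random.default_rng(seed)
        self.t = 0
        self.perm = None

    def _shuffle(self, obs):
        obs = np.asarray(obs)
        if self.perm is None or self.t % self.reshuffle_every == 0:
            self.perm = self.rng.permutation(obs.shape[0])  # new random ordering
        return obs[self.perm]

    def reset(self):
        self.t = 0
        return self._shuffle(self.env.reset())

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self.t += 1
        return self._shuffle(obs), reward, done, info

A policy trained on one fixed ordering typically falls apart under such a wrapper; the permutation-invariant network in the experiments above keeps balancing the pole regardless.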
[{"start": 0.0, "end": 4.76, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Karojona Ifehir."}, {"start": 4.76, "end": 12.08, "text": " Today, we are going to find out whether a machine can really think like we humans think."}, {"start": 12.08, "end": 15.0, "text": " The answer is yes, and no."}, {"start": 15.0, "end": 16.76, "text": " Let me try to explain."}, {"start": 16.76, "end": 20.6, "text": " Obviously, there are many differences between how we think,"}, {"start": 20.6, "end": 24.6, "text": " but first, let's try to argue for the similarities."}, {"start": 24.6, "end": 32.4, "text": " First, this neural network looks at an image and tries to decide whether it depicts a dog or not."}, {"start": 32.4, "end": 40.8, "text": " What this does is that it slices up the image into small pieces and keeps a score on what it had seen in these snippets."}, {"start": 40.8, "end": 46.8, "text": " Floppy ears, black snout, fur, okay, we're good."}, {"start": 46.8, "end": 49.8, "text": " We can conclude that we have a dog over here."}, {"start": 49.8, "end": 58.599999999999994, "text": " We humans would also have a hard time identifying a dog without these landmarks, plus one for thinking the same way."}, {"start": 58.599999999999994, "end": 63.0, "text": " Second, this is DeepMind's deep reinforcement learning algorithm."}, {"start": 63.0, "end": 73.0, "text": " It looks at the screen much like a human would and tries to learn what the controls do and what the game is about as the game is running."}, {"start": 73.0, "end": 81.8, "text": " And much like a human, first it has no idea what is going on and loses all of its lives almost immediately."}, {"start": 81.8, "end": 87.8, "text": " But, over time, it gets a bit of the feel of the game."}, {"start": 87.8, "end": 89.8, "text": " Improvements, good."}, {"start": 89.8, "end": 95.8, "text": " But, if we wait for longer, it sharpens its skill set so much that,"}, {"start": 95.8, "end": 105.8, "text": " look, it found out that the best way to beat the game is to dig a tunnel through the blocks and just kick back and enjoy the show."}, {"start": 105.8, "end": 108.2, "text": " Human like, excellent."}, {"start": 108.2, "end": 111.6, "text": " Again, plus one for thinking the same way."}, {"start": 111.6, "end": 122.8, "text": " Now, let's look at this new permutation invariant neural network and see what the name means, what it can do, and how it relates to our thinking."}, {"start": 122.8, "end": 126.39999999999999, "text": " Experiment number one, permutations."}, {"start": 126.39999999999999, "end": 129.6, "text": " A permutation means shuffling things around."}, {"start": 129.6, "end": 134.4, "text": " And we will add shuffling into this card pole balancing experiment."}, {"start": 134.4, "end": 146.0, "text": " Here, the learning algorithm does not look at the pixels of the game, but instead takes a look at numbers, for instance, angles, velocity and position."}, {"start": 146.0, "end": 151.6, "text": " And, as you see, with those, it learned to balance the pole super quickly."}, {"start": 151.6, "end": 156.79999999999998, "text": " The permutation part means that we shuffle this information every now and then."}, {"start": 156.79999999999998, "end": 161.2, "text": " That will surely confuse it, so let's try that."}, {"start": 161.2, "end": 164.2, "text": " There we go, and..."}, {"start": 164.2, "end": 167.6, "text": " Nice! 
Didn't even lose it."}, {"start": 167.6, "end": 169.6, "text": " Shuffle again."}, {"start": 169.6, "end": 178.0, "text": " It lost it due to a sudden change in the incoming data, but, look, it recovered rapidly."}, {"start": 178.0, "end": 180.79999999999998, "text": " And, can it keep it upright?"}, {"start": 180.8, "end": 182.4, "text": " Yes, it can."}, {"start": 182.4, "end": 186.4, "text": " So, is this a plus one or a minus one?"}, {"start": 186.4, "end": 190.0, "text": " Is this human thinking or robot thinking?"}, {"start": 190.0, "end": 196.0, "text": " Well, over time, humans can get used to input information switching around too."}, {"start": 196.0, "end": 198.20000000000002, "text": " But, not this quickly."}, {"start": 198.20000000000002, "end": 200.8, "text": " So, this one is debatable."}, {"start": 200.8, "end": 206.4, "text": " However, I guarantee that the next experiment will not be debatable at all."}, {"start": 206.4, "end": 211.8, "text": " Now, experiment number two, reshuffling on steroids."}, {"start": 211.8, "end": 216.20000000000002, "text": " We already learned that some amount of reshuffling is okay."}, {"start": 216.20000000000002, "end": 226.8, "text": " So, now, let's have our little AI play pong, but with a twist, because this time, the reshuffling is getting real."}, {"start": 226.8, "end": 236.20000000000002, "text": " Yes, we now broke up the screen into small little blocks, and have reshuffled it to the point that it is impossible to read."}, {"start": 236.2, "end": 238.0, "text": " But, you know what?"}, {"start": 238.0, "end": 240.2, "text": " Let's make it even worse."}, {"start": 240.2, "end": 245.6, "text": " Instead of just reshuffling, we will reshuffle the reshuffling."}, {"start": 245.6, "end": 247.0, "text": " What does that mean?"}, {"start": 247.0, "end": 251.2, "text": " We can rearrange these styles every few seconds."}, {"start": 251.2, "end": 259.0, "text": " A true nightmare situation for even an established algorithm, and especially when we are learning the game."}, {"start": 259.0, "end": 262.2, "text": " Okay, this is nonsense, right?"}, {"start": 262.2, "end": 267.8, "text": " There is no way anyone can meaningfully play the game from this noise, right?"}, {"start": 267.8, "end": 273.59999999999997, "text": " And now, hold on to your papers, because the learning algorithm still works fine."}, {"start": 273.59999999999997, "end": 276.8, "text": " Just fine."}, {"start": 276.8, "end": 280.8, "text": " Not only on pong, but on a racing game too."}, {"start": 280.8, "end": 282.59999999999997, "text": " Whoa!"}, {"start": 282.59999999999997, "end": 286.0, "text": " A big minus one for human thinking."}, {"start": 286.0, "end": 291.6, "text": " But, if it works fine, you know exactly what needs to be done."}, {"start": 291.6, "end": 294.6, "text": " Yes, let's make it even harder."}, {"start": 294.6, "end": 298.2, "text": " Experiment number three, stolen blocks."}, {"start": 298.2, "end": 307.6, "text": " Yes, let's keep reshuffling, change the reshuffling over time, and also steal 70% of the data."}, {"start": 307.6, "end": 310.6, "text": " And..."}, {"start": 310.6, "end": 311.6, "text": " Wow!"}, {"start": 311.6, "end": 320.6, "text": " It is still fine. 
It only sees 30% of the game all jumbled up, and it still plays just fine."}, {"start": 320.6, "end": 323.6, "text": " I cannot believe what I am seeing here."}, {"start": 323.6, "end": 325.20000000000005, "text": " Another minus one."}, {"start": 325.20000000000005, "end": 328.6, "text": " This does not seem to think like a human would."}, {"start": 328.6, "end": 331.6, "text": " So all that is absolutely amazing."}, {"start": 331.6, "end": 334.6, "text": " But, what is it looking at?"}, {"start": 334.6, "end": 335.6, "text": " Aha!"}, {"start": 335.6, "end": 344.20000000000005, "text": " See the white blocks? It is looking at the sides of the road, likely to know what the curvature is, and how to drive it."}, {"start": 344.20000000000005, "end": 350.6, "text": " And, look, only occasionally it peeps at the green patches too."}, {"start": 350.6, "end": 353.6, "text": " So, does this mean what I think it means?"}, {"start": 353.6, "end": 355.6, "text": " Experiment number four."}, {"start": 355.6, "end": 361.6, "text": " If you have been holding onto your paper so far, now squeeze that paper, shuffling,"}, {"start": 361.6, "end": 369.6, "text": " and let's shovel in some additional useless complexity, which will take the form of this background."}, {"start": 369.6, "end": 370.6, "text": " And..."}, {"start": 373.6, "end": 380.6, "text": " My goodness! It still works just fine, and the minus ones just keep on coming."}, {"start": 380.6, "end": 383.6, "text": " So, this was quite a ride."}, {"start": 383.6, "end": 385.6, "text": " But, what is the conclusion here?"}, {"start": 385.6, "end": 393.6, "text": " Well, learning algorithms show some ways in which they think like we think, but the answer is no,"}, {"start": 393.6, "end": 399.6, "text": " do not think of a neural network or a reinforcement learner as a digital copy of the brain."}, {"start": 399.6, "end": 401.6, "text": " Not even close."}, {"start": 401.6, "end": 408.6, "text": " Now, even better, this is not just a fantastic thought experiment, all this has utility."}, {"start": 408.6, "end": 415.6, "text": " For instance, in his lecture, one of the authors, David Ha, notes that humans can also get upside down goggles,"}, {"start": 415.6, "end": 420.6, "text": " or bicycles where the left and right directions are flipped."}, {"start": 420.6, "end": 425.6, "text": " And, if they do, it takes a great deal of time for the human to adapt."}, {"start": 425.6, "end": 429.6, "text": " For the neural network, no issues whatsoever."}, {"start": 429.6, "end": 431.6, "text": " What a time to be alive!"}, {"start": 431.6, "end": 434.6, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 434.6, "end": 440.6, "text": " If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 440.6, "end": 447.6, "text": " They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances."}, {"start": 447.6, "end": 454.6, "text": " And, hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure."}, {"start": 454.6, "end": 460.6, "text": " Plus, they are the only Cloud service with 48GB RTX 8000."}, {"start": 460.6, "end": 466.6, "text": " For current researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances,"}, {"start": 466.6, "end": 468.6, "text": " workstations, or servers."}, {"start": 468.6, "end": 475.6, "text": " Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today."}, {"start": 
475.6, "end": 480.6, "text": " Our thanks to Lambda for their long-standing support and for helping us make better videos for you."}, {"start": 480.6, "end": 490.6, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=9L5NqNDZHjk
Finally, Beautiful Virtual Scenes…For Less! ☀️
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/latentspace/published-work/The-Science-of-Debugging-with-W-B-Reports--Vmlldzo4OTI3Ng 📝 The paper "DONeRF: Towards Real-Time Rendering of Compact Neural Radiance Fields using Depth Oracle Networks" is available here: https://depthoraclenerf.github.io/ Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-1477041/ Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
And dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to see an incredible piece of progress in research on making our photos come to life. How do we do that? Well, of course, through view synthesis. To be more exact, we do that through this amazing technique that is referred to as a NeRF variant, which means that it is a learning-based algorithm that tries to reproduce real-world scenes from only a few views. In go a few photos of a scene, and it has to be able to synthesize new photorealistic images in between these photos. This is view synthesis in short. As you see here, it can be done quite well with the previous method. So, our question number one is, why bother publishing a new research paper on this? And question number two, there are plenty of view synthesis papers sloshing around, so why choose this paper? Well, this new technique is called DONeRF, NeRF via depth oracles. One of the key contributions here is that it is better at predicting how far things are from our camera, and it also takes less time to evaluate than its predecessors. It still makes thin structures a little murky, look at the tree here, but otherwise the rest seems close to the reference. So, is there value in this? Let's look at the results and see for ourselves. We will compare to the original NeRF technique first, and the results are comparable. What is going on? Is this not supposed to be better? Do these depth oracles help at all? Why is this just comparable to NeRF? Now, hold on to your papers, because here comes the key. The output of the two techniques may be comparable, but the input isn't. What does that mean? It means that the new technique was given 50 times less information. Whoa! 50 times less. That's barely anything. And when I read the paper, this was the point where I immediately went from a little disappointed to stunned. 50 times less information, and it can still create comparable videos. Yes, the answer is yes, the depth oracles really work. And it does not end there, there's more. It was also compared against Local Light Field Fusion, which is from two years ago, and the results are much cleaner. And it is also compared against... What? This is the Neural Basis Expansion technique, NeX in short. We just showcased it approximately two months ago, and not only has an excellent follow-up paper appeared in the same year, just a few months apart, but it already compares against the previous work, and it outperforms it handily. My goodness, the pace of progress in machine learning research never disappoints. And here comes the best part. If you have been holding onto your paper so far, now squeeze that paper, because the new technique not only requires 50 times less information, no, no, it is also nearly 50 times cheaper and faster to train the new neural network and to create the new images. So, let's pop the question. Is it real time? Yes, all this runs in real time. Look, if we wish to walk around in a photorealistic virtual scene, normally we would have to write a light simulation program and compute every single image separately, where each image would take several minutes to finish. Now we just need to shoot a few rays, and the neural network will try to understand the data and give us the rest instantly. What a time to be alive! And interestingly, look, the first author of this paper is Thomas Neff, who wrote this variant on NeRF. Nomen est omen, I guess. Congratulations!
And if you wish to discuss this paper, make sure to drop by on our Discord server. The link is available in the video description. This episode has been supported by Weights & Biases. In this post, they show you how to use their tool to check and visualize what your neural network is learning, and even more importantly, a case study on how to find bugs in your system and fix them. During my PhD studies, I trained a ton of neural networks, which were used in our experiments. However, over time, there was just too much data in our repositories, and what I am looking for is not data, but insight. And that's exactly how Weights & Biases helps you, by organizing your experiments. It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub, and more. And get this, Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their longstanding support, and for helping us make better videos for you. Thanks for watching, and for your generous support, and I'll see you next time.
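To make the "50 times less information" point concrete, here is a tiny sketch of the sampling-budget idea behind a depth oracle: instead of shading hundreds of samples spread along every camera ray, a handful of samples are placed around the depth the oracle predicts. This only illustrates the idea, not the authors' code; the oracle network and shading network are not shown, and all numbers are made-up placeholders.

import numpy as np

def dense_ray_samples(near, far, n=256):
    # Standard NeRF-style sampling: many points spread along the whole ray.
    return np.linspace(near, far, n)

def oracle_ray_samples(predicted_depth, n=8, spread=0.05):
    # Depth-oracle idea: concentrate a few samples around the predicted
    # surface depth, so far fewer network evaluations are needed per ray.
    return predicted_depth + spread * np.linspace(-1.0, 1.0, n)

print(dense_ray_samples(0.1, 10.0).size, "samples vs", oracle_ray_samples(3.2).size)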
[{"start": 0.0, "end": 4.72, "text": " And dear fellow scholars, this is two minute papers with Dr. Karo Ejona Ifehir."}, {"start": 4.72, "end": 12.8, "text": " Today we are going to see the incredible piece of progress in research works on making our photos come to life."}, {"start": 12.8, "end": 14.8, "text": " How do we do that?"}, {"start": 14.8, "end": 18.8, "text": " Well, of course, through view synthesis."}, {"start": 18.8, "end": 25.8, "text": " To be more exact, we do that through this amazing technique that is referred to as a Nerf variant,"}, {"start": 25.8, "end": 34.4, "text": " which means that it is a learning based algorithm that tries to reproduce real world scenes from only a few views."}, {"start": 34.4, "end": 44.900000000000006, "text": " In goal, a few photos of a scene, and it has to be able to synthesize new photorealistic images in between these photos."}, {"start": 44.900000000000006, "end": 47.400000000000006, "text": " This is view synthesis in short."}, {"start": 47.400000000000006, "end": 51.8, "text": " As you see here, it can be done quite well with the previous method."}, {"start": 51.8, "end": 58.599999999999994, "text": " So, our question number one is, why bother publishing a new research paper on this?"}, {"start": 58.599999999999994, "end": 67.3, "text": " And question number two, there are plenty of view synthesis papers sloshing around, so why choose this paper?"}, {"start": 67.3, "end": 73.9, "text": " Well, this new technique is called Dio Nerf, Nerf via depth oracles."}, {"start": 73.9, "end": 80.7, "text": " One of the key contributions here is that it is better at predicting how far things are from our camera,"}, {"start": 80.7, "end": 86.3, "text": " and it also takes less time to evaluate than its predecessors."}, {"start": 86.3, "end": 96.0, "text": " It still makes thin structures a little murky, look at the tree here, but otherwise the rest seems close to the reference."}, {"start": 96.0, "end": 98.7, "text": " So, is there value in this?"}, {"start": 98.7, "end": 101.80000000000001, "text": " Let's look at the results and see for ourselves."}, {"start": 101.80000000000001, "end": 110.2, "text": " We will compare to the original Nerf technique first, and the results are comparable."}, {"start": 110.2, "end": 112.0, "text": " What is going on?"}, {"start": 112.0, "end": 114.4, "text": " Is this not supposed to be better?"}, {"start": 114.4, "end": 117.2, "text": " Do these depth oracles help at all?"}, {"start": 117.2, "end": 120.7, "text": " Why is this just comparable to Nerf?"}, {"start": 120.7, "end": 124.9, "text": " Now, hold on to your papers, because here comes the key."}, {"start": 124.9, "end": 131.1, "text": " The output of the two techniques may be comparable, but the input isn't."}, {"start": 131.1, "end": 132.6, "text": " What does that mean?"}, {"start": 132.6, "end": 138.0, "text": " It means that the new technique was given 50 times less information."}, {"start": 138.0, "end": 139.7, "text": " Whoa!"}, {"start": 139.7, "end": 141.7, "text": " 50 times less."}, {"start": 141.7, "end": 143.7, "text": " That's barely anything."}, {"start": 143.7, "end": 151.7, "text": " And when I read the paper, this was the point where I immediately went from a little disappointed to stand."}, {"start": 151.7, "end": 157.2, "text": " 50 times less information, and it can still create comparable videos."}, {"start": 157.2, "end": 162.2, "text": " Yes, the answer is yes, the depth oracles really work."}, {"start": 162.2, "end": 165.2, "text": 
" And it does not end there, there's more."}, {"start": 165.2, "end": 175.2, "text": " It was also compared against local lightfield fusion, which is from two years ago, and the results are much cleaner."}, {"start": 175.2, "end": 178.2, "text": " And it is also compared against..."}, {"start": 178.2, "end": 179.7, "text": " What?"}, {"start": 179.7, "end": 184.7, "text": " This is the Neuro-Basis Expansion technique next in short."}, {"start": 184.7, "end": 194.2, "text": " We just showcased it approximately two months ago, and not only an excellent follow-up paper appeared in the same year,"}, {"start": 194.2, "end": 203.2, "text": " just a few months' support, but it already compares to the previous work, and it outperforms it handily."}, {"start": 203.2, "end": 208.7, "text": " My goodness, the pace of progress in machine learning research never disappoints."}, {"start": 208.7, "end": 211.2, "text": " And here comes the best part."}, {"start": 211.2, "end": 220.7, "text": " If you have been holding onto your paper so far, now squeeze that paper, because the new technique not only requires 50 times less information,"}, {"start": 220.7, "end": 231.2, "text": " no, no, it is also nearly 50 times cheaper and faster to train the new neural network and to create the new images."}, {"start": 231.2, "end": 234.2, "text": " So, let's pop the question."}, {"start": 234.2, "end": 236.2, "text": " Is it real time?"}, {"start": 236.2, "end": 239.7, "text": " Yes, all this runs in real time."}, {"start": 239.7, "end": 244.2, "text": " Look, if we wish to walk around in a photorealistic virtual scene,"}, {"start": 244.2, "end": 255.2, "text": " normally we would have to write a light simulation program and compute every single image separately, where each image would take several minutes to finish."}, {"start": 255.2, "end": 264.7, "text": " Now we just need to shoot a few rays, and the neural network will try to understand the data, and give us the rest instantly."}, {"start": 264.7, "end": 267.2, "text": " What a time to be alive!"}, {"start": 267.2, "end": 276.2, "text": " And interestingly, look, the first author of this paper is Thomas Neff, who wrote this variant on Nerf."}, {"start": 276.2, "end": 278.2, "text": " No men as stone men, I guess."}, {"start": 278.2, "end": 279.7, "text": " Congratulations!"}, {"start": 279.7, "end": 284.7, "text": " And if you wish to discuss this paper, make sure to drop by on our Discord server."}, {"start": 284.7, "end": 287.7, "text": " The link is available in the video description."}, {"start": 287.7, "end": 291.2, "text": " This episode has been supported by weights and biases."}, {"start": 291.2, "end": 303.7, "text": " In this post, they show you how to use their tool to check and visualize what your neural network is learning, and even more importantly, a case study on how to find bugs in your system and fix them."}, {"start": 303.7, "end": 309.2, "text": " During my PhD studies, I trained a ton of neural networks, which were used in our experiments."}, {"start": 309.2, "end": 317.7, "text": " However, over time, there was just too much data in our repositories, and what I am looking for is not data, but insight."}, {"start": 317.7, "end": 322.7, "text": " And that's exactly how weights and biases helps you by organizing your experiments."}, {"start": 322.7, "end": 330.7, "text": " It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub, and more."}, {"start": 330.7, "end": 337.7, "text": " And get 
this, weight and biases is free for all individuals, academics, and open source projects."}, {"start": 337.7, "end": 346.7, "text": " Make sure to visit them through wnba.com slash papers, or just click the link in the video description, and you can get a free demo today."}, {"start": 346.7, "end": 352.7, "text": " Our thanks to weights and biases for their longstanding support, and for helping us make better videos for you."}, {"start": 352.7, "end": 379.7, "text": " Thanks for watching, and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=ia-VBSF4KXA
This New Method Can Simulate a Vast Ocean! 🌊
❤️ Check out Perceptilabs and sign up for a free demo here: https://www.perceptilabs.com/papers 📝 The paper "Ships, Splashes, and Waves on a Vast Ocean" is available here: http://computationalsciences.org/publications/huang-2021-vast-ocean.html 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to put a fluid simulation in another fluid simulation, and with that create beautiful videos like this one. But how? And more importantly, why? Well, let's take two fluid simulation techniques. One will be the fluid implicit particle method, FLIP in short, and the other will be the boundary element method, BEM. So, why do we need two, or perhaps even more methods? Why not just one? Well, first, if we wish to simulate turbulence and high-frequency splashes near moving objects, FLIP is the answer. It is great at exactly that. However, it is not great at simulating big volumes of water. No matter, because there are other methods to handle that, for instance, the boundary element method that we just mentioned, BEM in short. It is great in these cases because the BEM variant that's been used in this paper simulates only the surface of the liquid, and for a large ocean, the surface is much, much smaller than the volume. Let's have a look at an example. Here is a pure FLIP simulation. This should be great for small splashes. Yes, that is indeed true, but look, the waves then disappear quickly. Let's look at what BEM does with this scene. Yes, the details are lacking, but the waves are lovely. Now, we have a good feel for the limitations of these techniques: small splashes, FLIP; oceans, BEM. But here is the problem. What if we have a scene where we have both? Which one should we use? This new technique says, well, use both. What is this insanity? Now, hold on to your papers and look. This is the result of using the two simulation techniques together. Now, if you look carefully, yes, you guessed it right. Within the box, there is a FLIP simulation, and outside of the boxes, there is a BEM simulation. And the two are fused together in a way that the new method really takes the best of both worlds. Just look at all that detail. What a beautiful simulation. My goodness. Now, this is not nearly as easy as slapping together two simulation domains. Look, there is plenty of work to be done in the transition zone. And also, how accurate is this? Here is the reference simulation for the water droplet scene from earlier. This simulation would take forever for a big scene, and is here for us to know how it should look. And let's see how close the new method is to it. Whoa! Now we're talking. Now, worry not about the seams. They are there for us to see the internal workings of the algorithm. The final composition will look like this. However, not even this technique is perfect; it has a potential limitation. Look, here is the new method compared to the reference footage. The waves in the wake of the ship are simulated really well, and so are the waves further away at the same time. That's amazing. However, what we don't get is, look, crisp details in the BEM regions. Those are gone. But just compare the results to this technique from a few years ago and get a feel for how far we have come just a couple of papers down the line. The pace of progress in computer graphics research is through the roof, loving it. What a time to be alive. And let's see, yes, the first author is Libo Huang. Again, if you are a seasoned Fellow Scholar, you may remember our video on his first paper on ferrofluids. And this is his third one. This man writes nothing but masterpieces.
As a result, this paper has been accepted to the SIGGRAPH Asia conference, and being a first author there is perhaps the computer graphics equivalent of winning the Olympic gold medal. It is also beautifully written, so make sure to check it out in the video description. Huge congratulations on this amazing work. I cannot believe that we are still progressing so quickly year after year. PerceptiLabs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables and gives you recommendations both during modeling and training and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers to easily install the free local version of their system today. Our thanks to PerceptiLabs for their support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
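The "FLIP box inside a BEM ocean" idea can be illustrated with a toy blending weight: 1 deep inside the FLIP box, 0 far outside in the pure BEM region, and a smooth ramp across the transition zone. This is only a one-dimensional illustration with made-up parameters; the actual coupling in the paper exchanges boundary conditions between the two solvers and involves far more than a simple blend.

import numpy as np

def transition_weight(x, box_min, box_max, band):
    # 1 inside the FLIP box, 0 beyond the transition band, smooth ramp between.
    d = np.maximum(box_min - x, x - box_max)        # distance outside the box (1D)
    return np.clip(1.0 - d / band, 0.0, 1.0)

x = np.linspace(-2.0, 2.0, 9)                       # sample positions along one axis
w = transition_weight(x, box_min=-1.0, box_max=1.0, band=0.5)
# a coupled quantity could then be mixed as: w * flip_value + (1.0 - w) * bem_value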
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.8, "end": 10.24, "text": " Today we are going to put a fluid simulation in another fluid simulation,"}, {"start": 10.24, "end": 14.24, "text": " and with that create beautiful videos like this one."}, {"start": 14.96, "end": 16.64, "text": " But how?"}, {"start": 16.64, "end": 19.04, "text": " And more importantly, why?"}, {"start": 19.76, "end": 23.04, "text": " Well, let's take two fluid simulation techniques."}, {"start": 23.04, "end": 26.080000000000002, "text": " One will be the fluid implicit particle,"}, {"start": 26.08, "end": 31.52, "text": " flayp in short, and the other will be the boundary element method, BEM."}, {"start": 32.239999999999995, "end": 38.08, "text": " So, why do we need two, or perhaps even more methods? Why not just one?"}, {"start": 38.64, "end": 45.68, "text": " Well, first, if we wish to simulate turbulence and high frequency splashes near moving objects,"}, {"start": 45.68, "end": 49.28, "text": " flayp is the answer. It is great at exactly that."}, {"start": 49.84, "end": 54.0, "text": " However, it is not great at simulating big volumes of water."}, {"start": 54.0, "end": 60.32, "text": " No matter because there are other methods to handle that, for instance, the boundary element method"}, {"start": 60.32, "end": 66.96000000000001, "text": " that we just mentioned, BEM in short. It is great in these cases because the BEM variant"}, {"start": 66.96000000000001, "end": 71.68, "text": " that's been used in this paper simulates only the surface of the liquid,"}, {"start": 72.24000000000001, "end": 77.28, "text": " and for a large ocean, the surface is much, much smaller than the volume."}, {"start": 78.8, "end": 83.03999999999999, "text": " Let's have a look at an example. Here is a pure flip simulation."}, {"start": 83.04, "end": 92.64, "text": " This should be great for small splashes. Yes, that is indeed true, but look, the waves then disappear quickly."}, {"start": 94.80000000000001, "end": 97.68, "text": " Let's look at what BEM does with this scene."}, {"start": 100.56, "end": 104.16000000000001, "text": " Yes, the details are lacking, but the waves are lovely."}, {"start": 104.96000000000001, "end": 109.84, "text": " Now, we have a good feel of the limitations of these techniques, small splashes flip,"}, {"start": 109.84, "end": 118.16, "text": " oceans BEM. But here is the problem. What if we have a scene where we have both?"}, {"start": 118.16, "end": 123.76, "text": " Which one should we use? This new technique says, well, use both."}, {"start": 123.76, "end": 130.4, "text": " What is this insanity? Now, hold on to your papers and look."}, {"start": 130.4, "end": 135.52, "text": " This is the result of using the two simulation techniques together."}, {"start": 135.52, "end": 142.8, "text": " Now, if you look carefully, yes, you guessed it right. Within the box, there is a flip simulation,"}, {"start": 142.8, "end": 151.76000000000002, "text": " and outside of the boxes, there is a BEM simulation. And the two are fused together in a way that the new"}, {"start": 151.76000000000002, "end": 159.52, "text": " method really takes the best of both worlds. Just look at all that detail. What a beautiful simulation."}, {"start": 159.52, "end": 167.28, "text": " My goodness. 
Now, this is not nearly as easy as slapping together two simulation domains."}, {"start": 168.8, "end": 175.12, "text": " Look, there is plenty of work to be done in the transition zone. And also, how accurate is this?"}, {"start": 175.84, "end": 181.04000000000002, "text": " Here is the reference simulation for the water droplet scene from earlier. This simulation"}, {"start": 181.04, "end": 189.92, "text": " would take forever for a big scene, and is here for us to know how it should look. And let's see"}, {"start": 189.92, "end": 200.48, "text": " how close the new method is to it. Whoa! Now we're talking. Now, worry not about the seams."}, {"start": 200.48, "end": 207.04, "text": " They are there for us to see the internal workings of the algorithm. The final composition will look"}, {"start": 207.04, "end": 215.84, "text": " like this. However, not even this technique is perfect as a potential limitation. Look,"}, {"start": 215.84, "end": 222.16, "text": " here is the new method compared to the reference footage. The waves in the wake of the ship are"}, {"start": 222.16, "end": 231.28, "text": " simulated really well, and so are the waves further away at the same time. That's amazing. However,"}, {"start": 231.28, "end": 240.16, "text": " what we don't get is, look, crisp details in the BEM regions. Those are gone. But just compare"}, {"start": 240.16, "end": 246.32, "text": " the results to this technique from a few years ago and get a feel of how far we have come"}, {"start": 246.32, "end": 252.4, "text": " just a couple papers down the line. The pace of progress in computer graphics research is through"}, {"start": 252.4, "end": 261.68, "text": " the roof, loving it. What a time to be alive. And let's see, yes, the first author is Liboh Wang."}, {"start": 262.64, "end": 268.8, "text": " Again, if you are a seasoned fellow scholar, you may remember our video on his first paper"}, {"start": 268.8, "end": 277.68, "text": " on Faro Fluids. And this is his third one. This man writes nothing but masterpieces. As a result,"}, {"start": 277.68, "end": 283.84000000000003, "text": " this paper has been accepted to the Cigarath Asia Conference, and being a first author there"}, {"start": 283.84000000000003, "end": 290.32, "text": " is perhaps the computer graphics equivalent of winning the Olympic gold medal. It is also"}, {"start": 290.32, "end": 296.08, "text": " beautifully written, so make sure to check it out in the video description. Huge congratulations"}, {"start": 296.08, "end": 303.12, "text": " on this amazing work. I cannot believe that we are still progressing so quickly year after year."}, {"start": 303.12, "end": 310.08, "text": " PerceptiLabs is a visual API for TensorFlow carefully designed to make machine learning as"}, {"start": 310.08, "end": 315.92, "text": " intuitive as possible. This gives you a faster way to build out models with more transparency"}, {"start": 315.92, "end": 322.64, "text": " into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle"}, {"start": 322.64, "end": 328.32, "text": " between the visual modeler and the code editor. It even generates visualizations for all the"}, {"start": 328.32, "end": 334.48, "text": " model variables and gives you recommendations both during modeling and training and does all this"}, {"start": 334.48, "end": 340.08, "text": " automatically. 
I only wish I had a tool like this when I was working on my neural networks during"}, {"start": 340.08, "end": 347.03999999999996, "text": " my PhD years. Visit perceptiLabs.com slash papers to easily install the free local version of their"}, {"start": 347.03999999999996, "end": 352.8, "text": " system today. Our thanks to perceptiLabs for their support and for helping us make better videos"}, {"start": 352.8, "end": 360.40000000000003, "text": " for you. Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=IXqj4HqNbPE
3D Modeling This Toaster Just Became Easier!
❤️ Check out Fully Connected by Weights & Biases: https://wandb.me/papers 📝 The paper "DAG Amendment for Inverse Control of Parametric Shapes" is available here: https://perso.telecom-paristech.fr/boubek/papers/DAG_Amendment/ Check out Yannic Kilcher's channel here: https://www.youtube.com/c/YannicKilcher/videos 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir. Today we are going to build the best virtual toaster that you have ever seen. And it's going to be so easy that it hardly seems possible. All this technique asks for is our input geometry to be a collection of parametric shapes. I'll tell you in a moment what that is, but for now let's see the toaster. Hmm, this looks fine, but what if we feel that it is not quite tall enough? Well, with this technique, look, it can change the height of it, which is fantastic, but something else also happens. Look, it also understands the body's relation to the other objects that are connected to it. We can also change the location of the handle, the slits can be adjusted symmetrically, and when we move the toaster, it understands that it moves together with the handles. This is super useful. For instance, have a look at this training example where if we change the wheels, it also understands that not only the wheels, but the wheel wells also have to change as well. This concept also works really well on this curtain. And all this means that we can not only dream up and execute these changes ourselves without having to ask a trained artist, but we can also do it super quickly and efficiently. And loving it, just to demonstrate how profound and non-trivial this understanding of interrelations is, here is an example of a complex object. Without this technique, if we grab one thing, exactly that one thing moves, which is represented by one of these sliders changing here. However, if we would grab this contraption in the real world, not only one thing would move, nearly every part would move at the same time. So, does this new method know that? Oh wow, it does! Look at that! And at the same time, not one, but many sliders are dancing around beautifully. Now, I mentioned that the requirement was that the input object has to be a parametric shape. This is something that can be generated from intuitive parameters. For instance, we can generate a circle if we say what the radius of the circle should be. The radius would be the parameter here, and the resulting circle is hence a parametric shape. In many domains, this is standard procedure, for instance, many computer-added design systems work with parametric objects. But we are not done yet, not even close. It also understands how the brush sizes that we use relates to our thinking. Don't believe it, let's have a look together. Right after we click, it detects that we have a small brush size, and therefore, in first, that we probably wish to do something with the handle, and there we go! That is really cool! And now, let's increase the brush size, and click nearby, and bam! There we go! Now it knows that we wish to interact with the drawer. Same with the doors. And hold on to your papers, because to demonstrate the utility of their technique, the authors also made a scene just for us. Look! Nice! This is no less than a two-minute paper-s branded chronometer. And here we can change the proportions, the dial, the hands, whatever we wish, and it is so easy and showcases the utility of the method so well. Now, I know what you are thinking, let's see it, taking! And will it be two minutes? Well, close enough. Certainly much closer to two minutes than the length of these videos, that is for sure. Thank you so much for the authors for taking their time off their busy day, just to make this. 
So, this is a truly wonderful tool, because even a novice artist, without 3D modeling expertise, can apply meaningful changes to complex pieces of geometry. No trained artist is required. What a time to be alive! Now, this didn't quite fit anywhere in this video, but I really wanted to show you this heartwarming message from Mark Chen, a research scientist at OpenAI. This really showcases one of the best parts of my job, and that is when the authors of the paper come in and enjoy the results with you Fellow Scholars. Loving it! Thank you so much again! Also, make sure to check out Yannic's channel for cool, in-depth videos on machine learning works. The link is available in the video description. This video has been supported by Weights & Biases. With their recent offering, Fully Connected, they bring machine learning practitioners together to share and discuss their ideas, learn from industry leaders, and even collaborate on projects together. You see, I get messages from you Fellow Scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is! Fully Connected is a great way to learn about the fundamentals, how to reproduce experiments, get your papers accepted to a conference, and more. Make sure to visit them through wandb.me/papers, or just click the link in the video description. Our thanks to Weights & Biases for their longstanding support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time!
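The transcript above explains parametric shapes with the circle-from-a-radius example. Below is a minimal, hypothetical Python sketch of that idea; the function name and point-sampling scheme are made up for illustration and are not from the paper or its codebase. The point is that a single intuitive parameter, the radius, generates the whole shape, so editing the shape means editing its parameters rather than its individual points.

```python
import math

def parametric_circle(radius, num_points=64, center=(0.0, 0.0)):
    """Generate a circle polyline from one intuitive parameter: the radius.

    The radius (and, optionally, the center) are the parameters; the returned
    point list is the resulting parametric shape.
    """
    cx, cy = center
    return [
        (cx + radius * math.cos(2.0 * math.pi * i / num_points),
         cy + radius * math.sin(2.0 * math.pi * i / num_points))
        for i in range(num_points)
    ]

# Editing the shape means editing its parameters, not its points:
small = parametric_circle(radius=1.0)
large = parametric_circle(radius=2.5)  # same shape "recipe", different parameter
```

A tool like the one shown in the video exposes such parameters through sliders and brush interactions instead of code, and the paper's contribution is figuring out which parameters to move when the user grabs part of the shape.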
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 5.0, "end": 10.0, "text": " Today we are going to build the best virtual toaster that you have ever seen."}, {"start": 10.0, "end": 15.0, "text": " And it's going to be so easy that it hardly seems possible."}, {"start": 15.0, "end": 22.0, "text": " All this technique asks for is our input geometry to be a collection of parametric shapes."}, {"start": 22.0, "end": 28.0, "text": " I'll tell you in a moment what that is, but for now let's see the toaster."}, {"start": 28.0, "end": 34.0, "text": " Hmm, this looks fine, but what if we feel that it is not quite tall enough?"}, {"start": 34.0, "end": 40.0, "text": " Well, with this technique, look, it can change the height of it, which is fantastic,"}, {"start": 40.0, "end": 43.0, "text": " but something else also happens."}, {"start": 43.0, "end": 53.0, "text": " Look, it also understands the body's relation to the other objects that are connected to it."}, {"start": 53.0, "end": 61.0, "text": " We can also change the location of the handle, the slits can be adjusted symmetrically,"}, {"start": 61.0, "end": 68.0, "text": " and when we move the toaster, it understands that it moves together with the handles."}, {"start": 68.0, "end": 71.0, "text": " This is super useful."}, {"start": 71.0, "end": 76.0, "text": " For instance, have a look at this training example where if we change the wheels,"}, {"start": 76.0, "end": 85.0, "text": " it also understands that not only the wheels, but the wheel wells also have to change as well."}, {"start": 85.0, "end": 89.0, "text": " This concept also works really well on this curtain."}, {"start": 89.0, "end": 94.0, "text": " And all this means that we can not only dream up and execute these changes ourselves"}, {"start": 94.0, "end": 102.0, "text": " without having to ask a trained artist, but we can also do it super quickly and efficiently."}, {"start": 102.0, "end": 109.0, "text": " And loving it, just to demonstrate how profound and non-trivial this understanding of interrelations is,"}, {"start": 109.0, "end": 112.0, "text": " here is an example of a complex object."}, {"start": 112.0, "end": 117.0, "text": " Without this technique, if we grab one thing, exactly that one thing moves,"}, {"start": 117.0, "end": 122.0, "text": " which is represented by one of these sliders changing here."}, {"start": 122.0, "end": 126.0, "text": " However, if we would grab this contraption in the real world,"}, {"start": 126.0, "end": 131.0, "text": " not only one thing would move, nearly every part would move at the same time."}, {"start": 131.0, "end": 136.0, "text": " So, does this new method know that?"}, {"start": 136.0, "end": 138.0, "text": " Oh wow, it does!"}, {"start": 138.0, "end": 140.0, "text": " Look at that!"}, {"start": 140.0, "end": 146.0, "text": " And at the same time, not one, but many sliders are dancing around beautifully."}, {"start": 146.0, "end": 153.0, "text": " Now, I mentioned that the requirement was that the input object has to be a parametric shape."}, {"start": 153.0, "end": 157.0, "text": " This is something that can be generated from intuitive parameters."}, {"start": 157.0, "end": 163.0, "text": " For instance, we can generate a circle if we say what the radius of the circle should be."}, {"start": 163.0, "end": 169.0, "text": " The radius would be the parameter here, and the resulting circle is hence a parametric shape."}, {"start": 169.0, "end": 173.0, "text": " 
In many domains, this is standard procedure, for instance,"}, {"start": 173.0, "end": 178.0, "text": " many computer-added design systems work with parametric objects."}, {"start": 178.0, "end": 181.0, "text": " But we are not done yet, not even close."}, {"start": 181.0, "end": 186.0, "text": " It also understands how the brush sizes that we use relates to our thinking."}, {"start": 186.0, "end": 190.0, "text": " Don't believe it, let's have a look together."}, {"start": 190.0, "end": 194.0, "text": " Right after we click, it detects that we have a small brush size,"}, {"start": 194.0, "end": 199.0, "text": " and therefore, in first, that we probably wish to do something with the handle,"}, {"start": 199.0, "end": 201.0, "text": " and there we go!"}, {"start": 201.0, "end": 203.0, "text": " That is really cool!"}, {"start": 203.0, "end": 210.0, "text": " And now, let's increase the brush size, and click nearby, and bam!"}, {"start": 210.0, "end": 212.0, "text": " There we go!"}, {"start": 212.0, "end": 216.0, "text": " Now it knows that we wish to interact with the drawer."}, {"start": 216.0, "end": 218.0, "text": " Same with the doors."}, {"start": 218.0, "end": 223.0, "text": " And hold on to your papers, because to demonstrate the utility of their technique,"}, {"start": 223.0, "end": 227.0, "text": " the authors also made a scene just for us."}, {"start": 227.0, "end": 228.0, "text": " Look!"}, {"start": 228.0, "end": 229.0, "text": " Nice!"}, {"start": 229.0, "end": 234.0, "text": " This is no less than a two-minute paper-s branded chronometer."}, {"start": 234.0, "end": 240.0, "text": " And here we can change the proportions, the dial, the hands, whatever we wish,"}, {"start": 240.0, "end": 247.0, "text": " and it is so easy and showcases the utility of the method so well."}, {"start": 247.0, "end": 251.0, "text": " Now, I know what you are thinking, let's see it, taking!"}, {"start": 251.0, "end": 254.0, "text": " And will it be two minutes?"}, {"start": 254.0, "end": 256.0, "text": " Well, close enough."}, {"start": 256.0, "end": 262.0, "text": " Certainly much closer to two minutes than the length of these videos, that is for sure."}, {"start": 262.0, "end": 268.0, "text": " Thank you so much for the authors for taking their time off their busy day, just to make this."}, {"start": 268.0, "end": 273.0, "text": " So, this is a truly wonderful tool, because even a novice artist,"}, {"start": 273.0, "end": 279.0, "text": " without 3D modeling expertise, can apply meaningful changes to complex pieces of geometry."}, {"start": 279.0, "end": 282.0, "text": " No trained artist is required."}, {"start": 282.0, "end": 284.0, "text": " What a time to be alive!"}, {"start": 284.0, "end": 292.0, "text": " Now, this didn't quite fit anywhere in this video, but I really wanted to show you this heartwarming message from Mark Chen,"}, {"start": 292.0, "end": 295.0, "text": " a research scientist at OpenAI."}, {"start": 295.0, "end": 302.0, "text": " This really showcases one of the best parts of my job, and that is, when the authors of the paper come in,"}, {"start": 302.0, "end": 305.0, "text": " and enjoy the results with you fellow scholars."}, {"start": 305.0, "end": 306.0, "text": " Loving it!"}, {"start": 306.0, "end": 308.0, "text": " Thank you so much again!"}, {"start": 308.0, "end": 314.0, "text": " Also, make sure to check out Yannick's channel for cool, in-depth videos on machine learning works."}, {"start": 314.0, "end": 317.0, "text": " The link is available in the video 
description."}, {"start": 317.0, "end": 321.0, "text": " This video has been supported by weights and biases."}, {"start": 321.0, "end": 327.0, "text": " Without the recent offering, fully connected, a place where they bring machine learning practitioners together,"}, {"start": 327.0, "end": 335.0, "text": " to share and discuss their ideas, learn from industry leaders, and even collaborate on projects together."}, {"start": 335.0, "end": 341.0, "text": " You see, I get messages from you fellow scholars telling me that you have been inspired by the series,"}, {"start": 341.0, "end": 344.0, "text": " but don't really know where to start."}, {"start": 344.0, "end": 346.0, "text": " And here it is!"}, {"start": 346.0, "end": 352.0, "text": " Fully connected is a great way to learn about the fundamentals, how to reproduce experiments,"}, {"start": 352.0, "end": 356.0, "text": " get your papers accepted to a conference, and more."}, {"start": 356.0, "end": 363.0, "text": " Make sure to visit them through wnbe.me slash papers, or just click the link in the video description."}, {"start": 363.0, "end": 369.0, "text": " Our thanks to weights and biases for their longstanding support, and for helping us make better videos for you."}, {"start": 369.0, "end": 376.0, "text": " Thanks for watching and for your generous support, and I'll see you next time!"}]
Two Minute Papers
https://www.youtube.com/watch?v=t33jvL7ftd4
This AI Learned Some Crazy Fighting Moves! 🥊
❤️ Check out Perceptilabs and sign up for a free demo here: https://www.perceptilabs.com/papers 📝 The paper "Neural Animation Layering for Synthesizing Martial Arts Movements" is available here: https://github.com/sebastianstarke/AI4Animation/blob/master/Media/SIGGRAPH_2021/Paper.pdf https://github.com/sebastianstarke/AI4Animation 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #gamedev
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to see if a virtual AI character can learn or perhaps even invent these amazing signature moves. And this is a paper that was written by Sebastian Starke and his colleagues. He is a recurring scientist on this series; for instance, earlier he wrote this magnificent paper about dribbling AI characters. Look, the key challenge here was that we were given only three hours of unstructured motion capture data. That is next to nothing, and from this next to nothing, it not only learned these motions really well, but it could weave them together even when a specific movement combination was not present in this training data. But as these motions are created by human animators, they may show at least three problems. One, the training data may contain poses that don't quite adhere to the physics of a real human character. Two, it is possible that the upper body does something that makes sense, the lower body also does something that makes sense, but the whole thing put together does not make too much sense anymore. Or three, we may have these foot sliding artifacts that you see here. These are more common than you might first think. Here is an example of it from a previous work. And look, nearly all of the previous methods struggle with it. Now, this new work uses 20 hours of unstructured training data. Remember, the previous one only used three, so we rightfully expect that by using more information it can also learn more. But the previous work was already amazing, so what more can we really expect this new one to do? Well, it can not only learn these motions and weave together these motions like previous works, but hold on to your papers, because it can now also come up with novel moves as well. Wow! This includes new attacking sequences and combining already existing attacks with novel footwork patterns, and it does all this spectacularly well. For instance, if we show it how to have its guard up and how to throw a punch, what will it learn? Get this, it will keep its guard up while throwing that punch. And it not only does that in a realistic, fluid movement pattern, but it also found out about something that has strategic value. Same with evading an attack with some head movement and counterattacking. Loving it. But how easy is it to use this? Do we need to be an AI scientist to be able to invoke these amazing motions? Well, if you have been holding on to your papers so far, now squeeze that paper and look here. Wow! You don't need to be an AI scientist to play with this, not at all. All you need is a controller to invoke these beautiful motions, and all this runs in real time. My goodness! For instance, you can crouch down and evade a potential attack by controlling the right stick and launch a punch in the meantime. And remember, not only do both halves have to make sense separately, the body motion has to make sense as a whole. And it really does. Look at that! And here comes the best part: you can even assemble your own signature attacks, for instance, perform that surprise spinning backfist, or an amazing spin kick. And yes, you can even go full Karate Kid with that crane kick. And as a cherry on top, the characters can also react to the strikes, clinch, or even try a takedown. So with that, there we go. We are, again, one step closer to having access to super realistic motion techniques for virtual characters, and all we need for this is a controller. And remember, all this already runs in real time.
Another amazing SIGGRAPH paper from Sebastian Starke. And get this, he is currently a fourth-year PhD student, and has already made profound contributions to the industry. Huge congratulations on this amazing achievement. What a time to be alive! PerceptiLabs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables, and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com/papers to easily install the free local version of their system today. Our thanks to PerceptiLabs for their support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.76, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Jona Ifehir."}, {"start": 4.76, "end": 12.780000000000001, "text": " Today, we are going to see if a virtual AI character can learn or perhaps even invent these"}, {"start": 12.780000000000001, "end": 15.44, "text": " amazing signature moves."}, {"start": 15.44, "end": 20.66, "text": " And this is a paper that was written by Sebastia Staka and his colleagues."}, {"start": 20.66, "end": 26.62, "text": " He is a recurring scientist on this series, for instance, earlier he wrote this magnificent"}, {"start": 26.62, "end": 29.98, "text": " paper about dribbling AI characters."}, {"start": 29.98, "end": 36.760000000000005, "text": " Look, the key challenge here was that we were given only three hours of unstructured motion"}, {"start": 36.760000000000005, "end": 38.36, "text": " capture data."}, {"start": 38.36, "end": 44.5, "text": " That is, next to nothing and from this next to nothing, it not only learned these motions"}, {"start": 44.5, "end": 51.120000000000005, "text": " really well, but it could weave them together even when a specific movement combination was"}, {"start": 51.120000000000005, "end": 54.28, "text": " not present in this training data."}, {"start": 54.28, "end": 61.56, "text": " But as these motions are created by human animators, they may show at least three problems."}, {"start": 61.56, "end": 67.06, "text": " One, the training data may contain poses that don't quite adhere to the physics of a"}, {"start": 67.06, "end": 68.76, "text": " real human character."}, {"start": 68.76, "end": 76.0, "text": " Two, it is possible that the upper body does something that makes sense, the lower body"}, {"start": 76.0, "end": 81.64, "text": " also does something that makes sense, but the whole thing put together does not make"}, {"start": 81.64, "end": 84.6, "text": " too much sense anymore."}, {"start": 84.6, "end": 90.04, "text": " Or three, we may have these foot sliding artifacts that you see here."}, {"start": 90.04, "end": 93.24, "text": " These are more common than you might first think."}, {"start": 93.24, "end": 99.4, "text": " Here is an example of it from a previous work."}, {"start": 99.4, "end": 104.04, "text": " And look, nearly all of the previous methods struggle with it."}, {"start": 104.04, "end": 109.76, "text": " Now, this no work uses 20 hours of unstructured training data."}, {"start": 109.76, "end": 116.2, "text": " Remember, the previous one only used three, so we rightfully expect that by using more"}, {"start": 116.2, "end": 119.36, "text": " information it can also learn more."}, {"start": 119.36, "end": 125.2, "text": " But the previous work was already amazing, so what more can we really expect this new"}, {"start": 125.2, "end": 126.36000000000001, "text": " one to do?"}, {"start": 126.36000000000001, "end": 132.24, "text": " Well, it can not only learn these motions, weave together these motions like previous"}, {"start": 132.24, "end": 139.20000000000002, "text": " works, but hold on to your papers because it can now also come up with novel moves as"}, {"start": 139.2, "end": 140.2, "text": " well."}, {"start": 140.2, "end": 141.2, "text": " Wow!"}, {"start": 141.2, "end": 148.04, "text": " This includes new attacking sequences and combining already existing attacks with novel"}, {"start": 148.04, "end": 154.48, "text": " footwork patterns, and it does all this spectacularly well."}, {"start": 154.48, "end": 160.12, "text": " For instance, if we show it 
how to have it's guard up and how to throw a punch, what will"}, {"start": 160.12, "end": 162.12, "text": " it learn?"}, {"start": 162.12, "end": 167.95999999999998, "text": " Get this, it will keep it's guard up while throwing that punch."}, {"start": 167.96, "end": 173.92000000000002, "text": " And it not only does that in a realistic fluid movement pattern, but it also found out"}, {"start": 173.92000000000002, "end": 181.88, "text": " about something that has strategic value, same with evading an attack with some head movement"}, {"start": 181.88, "end": 185.96, "text": " and counter attacking."}, {"start": 185.96, "end": 189.20000000000002, "text": " Loving it."}, {"start": 189.20000000000002, "end": 191.8, "text": " But how easy is it to use this?"}, {"start": 191.8, "end": 197.24, "text": " Do we need to be an AI scientist to be able to invoke these amazing motions?"}, {"start": 197.24, "end": 204.24, "text": " Well, if you have been holding on to your papers so far, now squeeze that paper and look"}, {"start": 204.24, "end": 205.24, "text": " here."}, {"start": 205.24, "end": 206.56, "text": " Wow!"}, {"start": 206.56, "end": 210.72, "text": " You don't need to be an AI scientist to play with this, not at all."}, {"start": 210.72, "end": 216.60000000000002, "text": " All you need is a controller to invoke these beautiful motions, and all this runs in"}, {"start": 216.60000000000002, "end": 218.0, "text": " real time."}, {"start": 218.0, "end": 219.76000000000002, "text": " My goodness!"}, {"start": 219.76000000000002, "end": 224.72, "text": " For instance, you can crouch down and evade a potential attack by controlling the right"}, {"start": 224.72, "end": 228.76, "text": " stick and launch a punch in the meantime."}, {"start": 228.76, "end": 234.16, "text": " And remember, not only both halves have to make sense separately, the body motion has"}, {"start": 234.16, "end": 236.72, "text": " to make sense as a whole."}, {"start": 236.72, "end": 238.68, "text": " And it really does."}, {"start": 238.68, "end": 240.32, "text": " Look at that!"}, {"start": 240.32, "end": 246.64, "text": " And here comes the best part, you can even assemble your own signature attacks, for instance,"}, {"start": 246.64, "end": 252.96, "text": " perform that surprise spinning backfist, an amazing spin kick."}, {"start": 252.96, "end": 258.44, "text": " And yes, you can even go full karate kid with that crane kick."}, {"start": 258.44, "end": 266.64, "text": " And as a cherry on top, the characters can also react to the strikes, clinch, or even"}, {"start": 266.64, "end": 268.72, "text": " try a take down."}, {"start": 268.72, "end": 271.04, "text": " So with that, there we go."}, {"start": 271.04, "end": 276.72, "text": " We are, again, one step closer to having access to super realistic motion techniques for"}, {"start": 276.72, "end": 281.48, "text": " virtual characters, and all we need for this is a controller."}, {"start": 281.48, "end": 286.20000000000005, "text": " And remember, all this already runs in real time."}, {"start": 286.20000000000005, "end": 290.28000000000003, "text": " Another amazing Cigar of Paper from Sebastian Stalker."}, {"start": 290.28000000000003, "end": 297.32, "text": " And get this, he is currently a fourth year PhD student, and already made profound contributions"}, {"start": 297.32, "end": 299.0, "text": " to the industry."}, {"start": 299.0, "end": 302.32, "text": " Huge congratulations on this amazing achievement."}, {"start": 302.32, "end": 304.32, "text": " What a time 
to be alive!"}, {"start": 304.32, "end": 309.40000000000003, "text": " PerceptiLabs is a visual API for TensorFlow carefully designed to make machine learning"}, {"start": 309.4, "end": 311.71999999999997, "text": " as intuitive as possible."}, {"start": 311.71999999999997, "end": 316.64, "text": " This gives you a faster way to build out models with more transparency into how your model"}, {"start": 316.64, "end": 320.88, "text": " is architected, how it performs, and how to debug it."}, {"start": 320.88, "end": 325.56, "text": " Look, it lets you toggle between the visual modeler and the code editor."}, {"start": 325.56, "end": 330.64, "text": " It even generates visualizations for all the model variables, and gives you recommendations"}, {"start": 330.64, "end": 335.32, "text": " both during modeling and training, and does all this automatically."}, {"start": 335.32, "end": 339.88, "text": " I only wish I had a tool like this when I was working on my neural networks during my"}, {"start": 339.88, "end": 341.4, "text": " PhD years."}, {"start": 341.4, "end": 347.08, "text": " Visit perceptiLabs.com slash papers to easily install the free local version of their system"}, {"start": 347.08, "end": 348.08, "text": " today."}, {"start": 348.08, "end": 352.71999999999997, "text": " Our thanks to perceptiLabs for their support, and for helping us make better videos for"}, {"start": 352.71999999999997, "end": 353.71999999999997, "text": " you."}, {"start": 353.72, "end": 380.84000000000003, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=ZDItmrqfxwI
This AI Stuntman Just Keeps Getting Better! 🏃
❤️ Train a neural network and track your experiments with Weights & Biases here: http://wandb.me/paperintro 📝 The paper "Learning a family of motor skills from a single motion clip" is available here: http://mrl.snu.ac.kr/research/ProjectParameterizedMotion/ParameterizedMotion.html 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #gamedev
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to see how an AI can learn crazy stunts from just one video clip. And if even that's not enough, it can do even more. This agent is embedded in a physics simulation, and first it looks at a piece of reference motion like this one. And then, after looking, it can reproduce it. That is already pretty cool, but it doesn't stop there. I think you know what's coming. Yes, not only learning, but improving the original motion. Look, it can refine this motion a bit, and then a bit more, and then some more. And this just keeps on going until... Wait a second, hold on to your papers, because this looks impossible. Are you trying to tell me that it has improved the move so much that it can jump through this? Yes, yes it does. Here is the first reproduction of the jump motion and the improved version side by side. Whoa! The difference speaks for itself. Absolutely amazing. We can also give it this reference clip to teach it to jump from one box to another. This isn't too difficult. And now comes one of my favorites from the paper, and that is testing how much it can improve upon this technique. Let's give it a try. It also learned how to perform a shorter jump, a longer jump, and now, oh yes, the final boss. Wow, it could even pull off this super long jump. It seems that this superbot can do absolutely anything. Well, almost. And it can not only learn these amazing moves, but it can also weave them together so well that we can build a cool little playground and it gets through it with ease. Well, most of it anyway. So, at this point, I was wondering how general the knowledge is that it learns from these example clips. A good sign of an intelligent actor is that things can change a little and it can adapt to that. Now, it clearly can deal with a changing environment, which is fantastic, but do you know what else it can deal with? And now, if you have been holding onto your papers, squeeze that paper, because it can also deal with changing body proportions. Yes, really. We can put it in a different body and it will still work. This chap is cursed with this crazy configuration and can still pull off a cartwheel. If you haven't been exercising lately, what's your excuse now? We can also ask it to perform the same task with more or less energy, or to even apply just a tiny bit of force for a punch, or to go full Mike Tyson on the opponent. So, how is all this wizardry possible? Well, one of the key contributions of this work is that the authors devised a method to search the space of motions efficiently. Since it does it in a continuous reinforcement learning environment, this is super challenging. At the risk of simplifying the solution, their method solves this by running both an exploration phase to find new ways of pulling off a move, and, shown with blue, a refinement phase: when it has found something that seems to work, it keeps refining it. Similar endeavors are also referred to as the exploration-exploitation problem, and the authors proposed a really cool new way of handling it. Now, there are plenty more contributions in the paper, so make sure to have a look at it in the video description. Especially given that this is a fantastic paper and the presentation is second to none. I am sure that the authors could have worked half as much on this project and this paper would still have been accepted, but they still decided to put in that extra mile.
And I am honored to be able to celebrate their amazing work together with you Fellow Scholars. And for now, an AI agent can look at a single clip of a motion and can not only perform it, but it can make it better, pull it off in different environments, and it can even be put in a different body and still do it well. What a time to be alive! This video has been supported by Weights & Biases. Being a machine learning researcher means doing tons of experiments and, of course, creating tons of data. But I am not looking for data, I am looking for insights. And Weights & Biases helps with exactly that. They have tools for experiment tracking, dataset and model versioning, and even hyperparameter optimization. No wonder this is the experiment tracking tool of choice at OpenAI, Toyota Research, Samsung, and many more prestigious labs. Make sure to use the link wandb.me/paperintro or just click the link in the video description and try this 10-minute example of Weights & Biases today to experience the wonderful feeling of training a neural network and being in control of your experiments. After you try it, you won't want to go back. Thanks for watching and for your generous support, and I'll see you next time.
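The transcript above mentions the exploration-exploitation problem that the authors tackle in a continuous reinforcement learning setting. The sketch below is only a generic, toy illustration of that trade-off, an epsilon-greedy loop over a made-up one-dimensional "motion parameter" with a noisy reward; it is not the authors' algorithm, which operates on full motion clips inside a physics simulation.

```python
import random

def reward(params):
    """Stand-in for 'how well did this motion variation perform?' (hypothetical)."""
    target = 0.7
    return -abs(params - target) + random.gauss(0.0, 0.05)  # noisy evaluation

def explore():
    return random.uniform(0.0, 1.0)                 # try something entirely new

def exploit(best_params):
    return best_params + random.gauss(0.0, 0.02)    # refine what already works

best_params, best_reward = explore(), float("-inf")
epsilon = 0.3  # fraction of the time we explore instead of refining
for step in range(500):
    candidate = explore() if random.random() < epsilon else exploit(best_params)
    r = reward(candidate)
    if r > best_reward:
        best_params, best_reward = candidate, r

print(f"best motion parameter ~= {best_params:.3f}, reward ~= {best_reward:.3f}")
```

The two ingredients mirror what the narration describes: occasionally trying something new (exploration), and otherwise refining the best variant found so far (exploitation).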
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.8, "end": 12.280000000000001, "text": " Today we are going to see how an AI can learn crazy stunts from just one video clip."}, {"start": 12.280000000000001, "end": 16.96, "text": " And if even that's not enough, it can do even more."}, {"start": 16.96, "end": 25.96, "text": " This agent is embedded in a physics simulation and first it looks at a piece of reference motion like this one."}, {"start": 25.96, "end": 30.64, "text": " And then after looking, it can reproduce it."}, {"start": 30.64, "end": 34.88, "text": " That is already pretty cool, but it doesn't stop there."}, {"start": 34.88, "end": 37.480000000000004, "text": " I think you know what's coming?"}, {"start": 37.480000000000004, "end": 43.040000000000006, "text": " Yes, not only learning, but improving the original motion."}, {"start": 43.040000000000006, "end": 50.120000000000005, "text": " Look, it can refine this motion a bit, and then a bit more, and then some more."}, {"start": 50.120000000000005, "end": 53.480000000000004, "text": " And this just keeps on going until..."}, {"start": 53.48, "end": 59.48, "text": " Wait a second, hold on to your papers because this looks impossible."}, {"start": 59.48, "end": 67.56, "text": " Are you trying to tell me that it's improved the move so much that it can jump through this?"}, {"start": 67.56, "end": 70.56, "text": " Yes, yes it does."}, {"start": 70.56, "end": 77.75999999999999, "text": " Here is the first reproduction of the jump motion and the improved version side by side."}, {"start": 77.75999999999999, "end": 79.32, "text": " Whoa!"}, {"start": 79.32, "end": 82.84, "text": " The difference speaks for itself."}, {"start": 82.84, "end": 85.12, "text": " Absolutely amazing."}, {"start": 85.12, "end": 91.44, "text": " We can also give it this reference clip to teach it to jump from one box to another."}, {"start": 91.44, "end": 93.64, "text": " This isn't quite difficult."}, {"start": 93.64, "end": 102.48, "text": " And now comes one of my favorites from the paper, and that is testing how much it can improve upon this technique."}, {"start": 102.48, "end": 104.08000000000001, "text": " Let's give it a try."}, {"start": 104.08000000000001, "end": 112.28, "text": " It also learned how to perform a shorter jump, a longer jump,"}, {"start": 112.28, "end": 117.28, "text": " and now, oh yes, the final boss."}, {"start": 117.28, "end": 121.56, "text": " Wow, it could even pull off this super long jump."}, {"start": 121.56, "end": 127.28, "text": " It seems that this superbot can do absolutely anything."}, {"start": 127.28, "end": 129.56, "text": " Well, almost."}, {"start": 129.56, "end": 136.64, "text": " And it can not only learn these amazing moves, but it can also weave them together so well"}, {"start": 136.64, "end": 144.79999999999998, "text": " that we can build a cool little playground and it gets through it with ease."}, {"start": 144.79999999999998, "end": 147.6, "text": " Well, most of it anyway."}, {"start": 147.6, "end": 154.88, "text": " So, at this point, I was wondering how general the knowledge is that it learns from these example clips."}, {"start": 154.88, "end": 162.39999999999998, "text": " A good sign of an intelligent actor is that things can change a little and it can adapt to that."}, {"start": 162.4, "end": 168.16, "text": " Now, it clearly can deal with the changing environment that is fantastic,"}, {"start": 168.16, "end": 
171.28, "text": " but do you know what acid can deal with?"}, {"start": 171.28, "end": 176.0, "text": " And now, if you have been holding onto your papers, squeeze that paper"}, {"start": 176.0, "end": 180.52, "text": " because it can also deal with changing body proportions."}, {"start": 180.52, "end": 182.4, "text": " Yes, really."}, {"start": 182.4, "end": 186.68, "text": " We can put it in a different body and it will still work."}, {"start": 186.68, "end": 197.68, "text": " This chap is cursed with this crazy configuration and can still pull off a cord wheel."}, {"start": 197.68, "end": 201.8, "text": " If you haven't been exercising lately, what's your excuse now?"}, {"start": 201.8, "end": 211.96, "text": " We can also ask it to perform the same task with more or less energy"}, {"start": 211.96, "end": 221.8, "text": " or to even apply just a tiny bit of force for a punch or to go full mic Tyson on the opponent."}, {"start": 221.8, "end": 225.56, "text": " So, how is all this wizardry possible?"}, {"start": 225.56, "end": 230.60000000000002, "text": " Well, one of the key contributions of this work is that the author's devised a method"}, {"start": 230.60000000000002, "end": 234.12, "text": " to search the space of motions efficiently."}, {"start": 234.12, "end": 238.20000000000002, "text": " Since it does it in a continuous reinforcement learning environment,"}, {"start": 238.20000000000002, "end": 240.44, "text": " this is super challenging."}, {"start": 240.44, "end": 247.64, "text": " At the risk of simplifying the solution, their method solves this by running both an exploration phase"}, {"start": 247.64, "end": 251.16, "text": " to find new ways of pulling off a move."}, {"start": 251.16, "end": 256.44, "text": " And with blue, you see that when it found something that seems to work,"}, {"start": 256.44, "end": 259.08, "text": " it also keeps refining it."}, {"start": 259.08, "end": 264.28, "text": " Similar endeavors are also referred to as the exploration exploitation problem"}, {"start": 264.28, "end": 268.84, "text": " and the authors proposed a really cool no way of handling it."}, {"start": 268.84, "end": 271.88, "text": " Now, there are plenty more contributions in the paper,"}, {"start": 271.88, "end": 275.23999999999995, "text": " so make sure to have a look at it in the video description."}, {"start": 275.23999999999995, "end": 281.96, "text": " Especially given that this is a fantastic paper and the presentation is second to none."}, {"start": 281.96, "end": 286.67999999999995, "text": " I am sure that the authors could have worked half as much on this project"}, {"start": 286.67999999999995, "end": 289.32, "text": " and this paper would still have been accepted,"}, {"start": 289.32, "end": 292.91999999999996, "text": " but they still decided to put in that extra mile."}, {"start": 292.91999999999996, "end": 297.4, "text": " And I am honored to be able to celebrate their amazing work together"}, {"start": 297.4, "end": 299.23999999999995, "text": " with UFLO scholars."}, {"start": 299.23999999999995, "end": 304.52, "text": " And for now, an AI agent can look at a single clip of a motion"}, {"start": 304.52, "end": 309.4, "text": " and can not only perform it, but it can make it better,"}, {"start": 309.4, "end": 311.64, "text": " pull it off in different environments,"}, {"start": 311.64, "end": 317.08, "text": " and it can be even put in a different body and still do it well."}, {"start": 317.08, "end": 318.91999999999996, "text": " What a time to be alive!"}, 
{"start": 318.91999999999996, "end": 322.52, "text": " This video has been supported by weights and biases."}, {"start": 322.52, "end": 327.0, "text": " Being a machine learning researcher means doing tons of experiments"}, {"start": 327.0, "end": 330.28, "text": " and, of course, creating tons of data."}, {"start": 330.28, "end": 334.6, "text": " But I am not looking for data, I am looking for insights."}, {"start": 334.6, "end": 337.96, "text": " And weights and biases helps with exactly that."}, {"start": 337.96, "end": 340.28, "text": " They have tools for experiment tracking,"}, {"start": 340.28, "end": 345.88, "text": " data set and model versioning, and even hyper-parameter optimization."}, {"start": 345.88, "end": 350.36, "text": " No wonder this is the experiment tracking tool choice of open AI,"}, {"start": 350.36, "end": 354.92, "text": " Toyota Research, Samsung, and many more prestigious labs."}, {"start": 354.92, "end": 360.68, "text": " Make sure to use the link WNB.ME-SLASH-PAPER-INTROW"}, {"start": 360.68, "end": 363.32, "text": " or just click the link in the video description"}, {"start": 363.32, "end": 367.56, "text": " and try this 10-minute example of weights and biases today"}, {"start": 367.56, "end": 371.8, "text": " to experience the wonderful feeling of training a neural network"}, {"start": 371.8, "end": 375.16, "text": " and being in control of your experiments."}, {"start": 375.16, "end": 377.88, "text": " After you try it, you won't want to go back."}, {"start": 377.88, "end": 380.12, "text": " Thanks for watching and for your generous support,"}, {"start": 380.12, "end": 388.36, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=ogL-2IClOug
NVIDIA’s New Technique: Beautiful Models For Less! 🌲
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Appearance-Driven Automatic 3D Model Simplification" is available here: https://research.nvidia.com/publication/2021-04_Appearance-Driven-Automatic-3D 📝 The differentiable material synthesis paper is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/photorealistic-material-learning-and-synthesis/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-1225988/ Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #nvidia #gamedev
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to see how crazy good NVIDIA's new system is at simplifying virtual objects. These objects are used to create photorealistic footage for feature-length movies, virtual worlds, and more. But here comes the problem. Sometimes these geometries are so detailed, they are prohibitively expensive to store and render efficiently. Here are some examples from one of our papers that were quite challenging to iterate on and render. This took several minutes to render and always ate all the memory in my computer. So, what can we do if we would still like to get crisp, high-quality geometry, but cheaper and quicker? I'll show you in a moment. This is part of a super complex scene. Get this. It is so complex that it takes nearly 100GB of storage space to render just one image of this, and it is typically used for benchmarking rendering algorithms. This is the Nürburgring of light transport algorithms, if you will. Well, hold on to your papers, because I said that I'll show you in a moment what we can do to get all this at a more affordable cost, but in fact, you are looking at the results of the new method right now. Yes, parts of this image are the original geometry and other parts have already been simplified. So, which is which? Do you see the difference? Please stop the video and let me know in the comments below. I'll wait. Thank you. So, let's see together. Yes, this is the original geometry that requires over 5 billion triangles. And this is the simplified one, which... What? Can this really be? This uses less than 1% of the number of triangles compared to this. In fact, it's less than half a percent. That is insanity. This really means that about every 200 triangles are replaced with just one triangle, and it still looks mostly the same. That sounds flat out impossible to me. Wow! So, how does this witchcraft even work? Well, now you see, this is the power of differentiable rendering. The problem formulation is as follows. We tell the algorithm: here are the results that you need to get; find the geometry and material properties that result in this. It runs all this by means of optimization, which means that it will have a really crude initial guess that doesn't even seem to resemble the target geometry. But then, over time, it starts refining it and it gets closer and closer to the reference. This process is truly a sight to behold. Look at how beautifully it is approximating the target geometry. This looks very close and is much cheaper to store and render. I loved this example too. Previously, this differentiable rendering concept has been used to take a photograph and find a photorealistic material model that matches it, which we can then put into our simulation program. This work did very well with materials, but it did not capture the geometry. This other work did something similar to this new paper, which means that it jointly found the geometry and material properties. But, as you see, high-frequency details were not as good as with this one. You see here, these details are gone. And now, just two years and one paper later, we can get a piece of geometry that is so detailed that it needs billions of triangles, and it can be simplified 200 to 1. Now, if even that is not enough, admittedly, it is still a little rudimentary, but it even works for animated characters. I wonder where we will be two more papers down the line from here. And for now, wow!
Scientists at NVIDIA knocked it out of the park with this one. Huge congratulations to the team! What a time to be alive! So, there you go! This was quite a ride and I hope you enjoyed it at least half as much as I did. And if you enjoyed it at least as much as I did, and are thinking that this light transport thing is pretty cool and you would like to learn more about it, I held a master-level course on this topic at the Technical University of Vienna. Since I was always teaching it to a handful of motivated students, I thought that the teachings shouldn't only be available for the privileged few who can afford a college education. No, no, the teachings should be available for everyone. Free education for everyone, that's what I want. So, the course is available free of charge for everyone, no strings attached, so make sure to click the link in the video description to get started. We write a full light simulation program from scratch there and learn about physics, the world around us, and more. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only cloud service with 48GB RTX 8000s. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com/papers to sign up for one of their amazing GPU instances today. Thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
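The transcript above describes the optimization loop behind differentiable rendering: start from a crude initial guess and refine the parameters until the rendered result matches the reference. The sketch below is a heavily simplified one-dimensional analogue with a toy "renderer" and finite-difference gradients, intended only to illustrate that loop; it is not NVIDIA's method, which differentiates an actual renderer over triangle meshes and material parameters and uses analytic gradients.

```python
import numpy as np

def render(params, x):
    """Toy 'renderer': produces a 1D image-like signal from two parameters.
    Stands in for a real differentiable renderer over geometry and materials."""
    center, width = params
    return np.exp(-((x - center) ** 2) / (2.0 * width ** 2))

x = np.linspace(-1.0, 1.0, 256)
target = render(np.array([0.3, 0.15]), x)   # the 'reference image'
params = np.array([0.0, 0.5])               # crude initial guess
lr, eps = 0.05, 1e-4

for step in range(2000):
    # Finite-difference gradient of the image loss with respect to the parameters.
    base = np.mean((render(params, x) - target) ** 2)
    grad = np.zeros_like(params)
    for i in range(len(params)):
        bumped = params.copy()
        bumped[i] += eps
        grad[i] = (np.mean((render(bumped, x) - target) ** 2) - base) / eps
    params -= lr * grad  # refine the guess, step by step

print("recovered parameters:", np.round(params, 3))  # should approach [0.3, 0.15]
```

The refinement the narration shows is this same idea at scale: the "parameters" are thousands of vertex positions and material values, and the loss compares rendered images of the simplified model against renders of the original, 5-billion-triangle geometry.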
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karojona Ifehir."}, {"start": 5.0, "end": 13.0, "text": " Today we are going to see how crazy good MVD as new system is at simplifying virtual objects."}, {"start": 13.0, "end": 21.0, "text": " These objects are used to create photorealistic footage for feature length movies, virtual worlds and more."}, {"start": 21.0, "end": 24.0, "text": " But here comes the problem."}, {"start": 24.0, "end": 32.0, "text": " Sometimes these geometries are so detailed, they are prohibitively expensive to store and render efficiently."}, {"start": 32.0, "end": 39.0, "text": " Here are some examples from one of our papers that were quite challenging to iterate on and render."}, {"start": 39.0, "end": 45.0, "text": " This took several minutes to render and always ate all the memory in my computer."}, {"start": 45.0, "end": 54.0, "text": " So, what can we do if we would still like to get crisp, high quality geometry, but cheaper and quicker?"}, {"start": 54.0, "end": 56.0, "text": " I'll show you in a moment."}, {"start": 56.0, "end": 59.0, "text": " This is part of a super complex scene."}, {"start": 59.0, "end": 72.0, "text": " Get this. It is so complex that it takes nearly 100GB of storage space to render just one image of this and is typically used for benchmarking rendering algorithms."}, {"start": 72.0, "end": 77.0, "text": " This is the nerve-wrong of light transport algorithms if you will."}, {"start": 77.0, "end": 91.0, "text": " Well, hold on to your papers because I said that I'll show you in a moment what we can do to get all this at a more affordable cost, but in fact, you are looking at the results of the new method right now."}, {"start": 91.0, "end": 98.0, "text": " Yes, parts of this image are the original geometry and other parts have already been simplified."}, {"start": 98.0, "end": 106.0, "text": " So, which is which? Do you see the difference? Please stop the video and let me know in the comments below."}, {"start": 106.0, "end": 109.0, "text": " I'll wait. Thank you."}, {"start": 109.0, "end": 113.0, "text": " So, let's see together."}, {"start": 113.0, "end": 120.0, "text": " Yes, this is the original geometry that requires over 5 billion triangles."}, {"start": 120.0, "end": 133.0, "text": " And this is the simplified one, which... What? Can this really be? This uses less than 1% of the number of triangles compared to this."}, {"start": 133.0, "end": 139.0, "text": " In fact, it's less than half a percent. That is insanity."}, {"start": 139.0, "end": 148.0, "text": " This really means that about every 200 triangles are replaced with just one triangle and it still looks mostly the same."}, {"start": 148.0, "end": 153.0, "text": " That sounds flat out impossible to me. Wow!"}, {"start": 153.0, "end": 162.0, "text": " So, how does this switchcraft even work? Well, now you see this is the power of differentiable rendering."}, {"start": 162.0, "end": 175.0, "text": " The problem formulation is as follows. 
We tell the algorithm that here are the results that you need to get, find the geometry and material properties that were resolved in this."}, {"start": 175.0, "end": 186.0, "text": " It runs all this by means of optimization, which means that it will have a really crude initial guess that doesn't even seem to resemble the target geometry."}, {"start": 186.0, "end": 193.0, "text": " But then, over time, it starts refining it and it gets closer and closer to the reference."}, {"start": 193.0, "end": 201.0, "text": " This process is truly a sight to behold. Look at how beautifully it is approximating the target geometry."}, {"start": 201.0, "end": 207.0, "text": " This looks very close and is much cheaper to store and render."}, {"start": 207.0, "end": 211.0, "text": " I loved this example too."}, {"start": 211.0, "end": 224.0, "text": " Previously, this differentiable rendering concept has been used to be able to take a photograph and find a photorealistic material model that we can put in our simulation program that matches it."}, {"start": 224.0, "end": 240.0, "text": " This work did very well with materials, but it did not capture the geometry. This other work did something similar to this new paper, which means that it jointly found the geometry and material properties."}, {"start": 240.0, "end": 249.0, "text": " But, as you see, high-frequency details were not as good as with this one. You see here, these details are gone."}, {"start": 249.0, "end": 264.0, "text": " And now, just two years and one paper later, we can get a piece of geometry that is so detailed that it needs billions of triangles and it can be simplified 200 to 1."}, {"start": 264.0, "end": 274.0, "text": " Now, if even that is not enough, admittedly, it is still a little rudimentary, but it even works for animated characters."}, {"start": 274.0, "end": 279.0, "text": " I wonder where we will be two more papers down the line from here."}, {"start": 279.0, "end": 287.0, "text": " And for now, wow! Scientists at Nvidia knocked it out of the park with this one. Huge congratulations to the team!"}, {"start": 287.0, "end": 297.0, "text": " What a time to be alive! So, there you go! This was quite a ride and I hope you enjoyed it at least half as much as I did."}, {"start": 297.0, "end": 311.0, "text": " And if you enjoyed it at least as much as I did, and are thinking that this light transport thing is pretty cool and you would like to learn more about it, I held a master-level course on this topic at the Technical University of Vienna."}, {"start": 311.0, "end": 322.0, "text": " Since I was always teaching it to a handful of motivated students, I thought that the teachings shouldn't only be available for the privileged few who can afford a college education."}, {"start": 322.0, "end": 330.0, "text": " No, no, the teachings should be available for everyone. Free education for everyone, that's what I want."}, {"start": 330.0, "end": 339.0, "text": " So, the course is available free of charge for everyone, no strings attached, so make sure to click the link in the video description to get started."}, {"start": 339.0, "end": 346.0, "text": " We write a full light simulation program from scratch there and learn about physics, the world around us, and more."}, {"start": 346.0, "end": 356.0, "text": " This episode has been supported by Lambda GPU Cloud. 
If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 356.0, "end": 370.0, "text": " They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Asia."}, {"start": 370.0, "end": 384.0, "text": " Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers."}, {"start": 384.0, "end": 391.0, "text": " Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today."}, {"start": 391.0, "end": 401.0, "text": " Thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=CfJ074h9K8s
The Tale Of The Unscrewable Bolt! 🔩
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/mathisfederico/wandb_features/reports/Visualizing-Confusion-Matrices-With-W-B--VmlldzoxMzE5ODk 📝 The paper "Intersection-free Rigid Body Dynamics" is available here: https://ipc-sim.github.io/rigid-ipc/ Scene credits: - Bolt - YSoft be3D - Expanding Lock Box - Angus Deveson - Bike Chain and Sprocket - Okan (bike chain), Hampus Andersson (sprocket) 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image source: https://pixabay.com/images/id-1924173/ Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. I have to say we haven't had a simulation paper in a while, so today's episode is going to be my way of medicating myself. You are more than welcome to watch the process. And this paper is about performing collision detection. You see, when we write a simple space game, detecting whether a collision has happened or not is mostly a trivial endeavor. However, now, instead, let's look at the kind of simulation complexity that you are expecting from a Two Minute Papers video. First, let's try to screw this bolt in using an industry-standard simulation system. And it is... stuck. Hmm... Why? Because here, we would need to simulate in detail not only whether two things collide (they collide all the time), but we also need to check for and simulate friction. Let's see what this new simulation method does with the same scene. And... Oh, yes! This one isn't screwing with us and does the job perfectly. Excellent! However, this was not nearly the most complex thing it can do. Let's try some crazy geometry with crazy movements and tons of friction. There we go. This one will do. Welcome to the expanding lock box experiment. So, what is this? Look, as we turn the key, the locking pins retract, and the bottom is now allowed to fall. This scene contains tens to hundreds of thousands of contacts, and yet it still works perfectly. Beautiful! I love this one, because with this simulation, we can test intricate mechanisms for robotics and more before committing to manufacturing anything. And, unlike with previous methods, we don't need to worry whether the simulation is correct or not, and we can be sure that if we 3D print this, it will behave exactly this way. So good! Also, here come some of my favorite experiments from the paper. For instance, it can also simulate a piston attached to a rotating disk, smooth motion on one wheel leading to intermittent motion on the other one. And, if you feel the urge to build a virtual bike, don't worry for a second, because your chain and sprocket mechanisms will work exactly as you expect them to. Loving it! Now, interestingly, look here. The time-step size used with the new technique is a hundred times bigger, which is great. We can advance the time in bigger pieces when computing the simulation. That is good news indeed. However, every time we do so, we still have to compute a great deal more. The resulting computation time is still at least a hundred times slower than previous methods. However, those methods don't count, at least not on these scenes, because they produced incorrect results. Or, to look at it another way, this is the fastest simulator that actually works. Still, it is not that slow. The one with the intermittent motion takes less than a second per time-step, which likely means a few seconds per frame, while the bolt-screwing scene is likely in the minutes-per-frame domain. Very impressive! And, if you're a seasoned Fellow Scholar, you know what's coming. This is where we invoke the First Law of Papers, which says that research is a process. Do not look at where we are, look at where we will be two more papers down the line. And, two more papers down the line, I am sure even the more complex simulations will be done in a matter of seconds. What a time to be alive! This episode has been supported by Weights & Biases.
In this post, they show you how to use their tool to visualize confusion matrices and find out where your neural network made mistakes, and what exactly those mistakes were. Weight and biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs, such as OpenAI, Toyota Research, GitHub, and more. And the best part is that weights and biases is free for all individuals, academics, and open-source projects. It really is as good as it gets. If you are going to visit them through wnb.com slash papers, or just click the link in the video description, and you can get the free demo today. Our thanks to weights and biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
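The timing discussion above is easy to make concrete with a little arithmetic: once we know how much simulated time one time-step covers and how long one step takes to compute, we can estimate the cost of one rendered frame. The numbers in this short Python sketch are illustrative assumptions, not measurements from the paper.

# Back-of-the-envelope conversion from per-time-step cost to per-frame cost.
# All numbers below are illustrative assumptions, not values from the paper.

def seconds_per_frame(step_cost_s, time_step_s, frame_rate_hz=24.0):
    """Wall-clock seconds needed to produce one rendered frame of animation."""
    simulated_time_per_frame = 1.0 / frame_rate_hz
    steps_per_frame = simulated_time_per_frame / time_step_s
    return steps_per_frame * step_cost_s

# Assumed example: a 1/100 s time step that takes ~0.8 s of wall-clock time per step.
print(round(seconds_per_frame(step_cost_s=0.8, time_step_s=0.01), 1))  # -> 3.3

With the assumed 1/100 s time step and roughly 0.8 seconds of computation per step, a 24 fps animation frame needs about four steps, which lands in the few-seconds-per-frame regime mentioned above.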
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Ejona Ifaher."}, {"start": 5.0, "end": 14.0, "text": " I have to say we haven't had a simulation paper in a while, so today's episode is going to be my way of medicating myself."}, {"start": 14.0, "end": 17.0, "text": " You are more than welcome to watch the process."}, {"start": 17.0, "end": 21.0, "text": " And this paper is about performing collision detection."}, {"start": 21.0, "end": 28.0, "text": " You see, when we write a simple space game, detecting whether a collision has happened or not"}, {"start": 28.0, "end": 31.0, "text": " is mostly a trivial endeavor."}, {"start": 31.0, "end": 40.0, "text": " However, now, instead, let's look at the kind of simulation complexity that you are expecting from a two-minute papers video."}, {"start": 40.0, "end": 47.0, "text": " First, let's try to screw this bolt in using an industry standard simulation system."}, {"start": 47.0, "end": 49.0, "text": " And it is..."}, {"start": 49.0, "end": 51.0, "text": "...stock."}, {"start": 51.0, "end": 52.0, "text": " Hmm..."}, {"start": 52.0, "end": 53.0, "text": " Why?"}, {"start": 53.0, "end": 62.0, "text": " Because here, we would need to simulate in detail not only whether two things collide, they collide all the time,"}, {"start": 62.0, "end": 67.0, "text": " but we need to check for and simulate friction too."}, {"start": 67.0, "end": 71.0, "text": " Let's see what this new simulation method does with the same scene."}, {"start": 71.0, "end": 73.0, "text": " And..."}, {"start": 73.0, "end": 74.0, "text": " Oh, yes!"}, {"start": 74.0, "end": 78.0, "text": " This one isn't screwing with us and does the job perfectly."}, {"start": 78.0, "end": 79.0, "text": " Excellent!"}, {"start": 79.0, "end": 84.0, "text": " However, this was not nearly the most complex thing it can do."}, {"start": 84.0, "end": 90.0, "text": " Let's try some crazy geometry with crazy movements and tons of friction."}, {"start": 90.0, "end": 92.0, "text": " There we go."}, {"start": 92.0, "end": 94.0, "text": " This one will do."}, {"start": 94.0, "end": 97.0, "text": " Welcome to the expanding log box experiment."}, {"start": 97.0, "end": 99.0, "text": " So, what is this?"}, {"start": 99.0, "end": 107.0, "text": " Look, as we turn the key, the locking pins retract, and the bottom is now allowed to fall."}, {"start": 107.0, "end": 115.0, "text": " This scene contains tens to hundreds of thousands of contacts, and yet it still works perfectly."}, {"start": 115.0, "end": 117.0, "text": " Beautiful!"}, {"start": 117.0, "end": 127.0, "text": " I love this one because with this simulation, we can test intricate mechanisms for robotics and more before committing to manufacturing anything."}, {"start": 127.0, "end": 133.0, "text": " And, unlike with previous methods, we don't need to worry whether the simulation is correct or not,"}, {"start": 133.0, "end": 140.0, "text": " and we can be sure that if we 3D print this, it will behave exactly this way."}, {"start": 140.0, "end": 142.0, "text": " So good!"}, {"start": 142.0, "end": 147.0, "text": " Also, here come some of my favorite experiments from the paper."}, {"start": 147.0, "end": 153.0, "text": " For instance, it can also simulate a piston attached to a rotating disk,"}, {"start": 153.0, "end": 160.0, "text": " smooth motion on one wheel leading to intermittent motion on the other one."}, {"start": 160.0, "end": 167.0, "text": " And, if you feel the urge to build a virtual bike, don't 
worry for a second,"}, {"start": 167.0, "end": 172.0, "text": " because your chain and sprocket mechanisms will work exactly as you expect to."}, {"start": 172.0, "end": 174.0, "text": " Loving it!"}, {"start": 174.0, "end": 177.0, "text": " Now, interestingly, look here."}, {"start": 177.0, "end": 183.0, "text": " The time-step size used with the new technique is a hundred times bigger, which is great."}, {"start": 183.0, "end": 188.0, "text": " We can advance the time in bigger pieces when computing the simulation."}, {"start": 188.0, "end": 191.0, "text": " That is good news indeed."}, {"start": 191.0, "end": 196.0, "text": " However, every time we do so, we still have to compute a great deal more."}, {"start": 196.0, "end": 203.0, "text": " The resulting computation time is still at least a hundred times slower than previous methods."}, {"start": 203.0, "end": 211.0, "text": " However, those methods don't count, at least not on these scenes, because they have produced incorrect results."}, {"start": 211.0, "end": 216.0, "text": " Look at it some other way, this is the fastest simulator that actually works."}, {"start": 216.0, "end": 219.0, "text": " Still, it is not that slow."}, {"start": 219.0, "end": 224.0, "text": " The one with the intermittent motion takes less than a second per time-step,"}, {"start": 224.0, "end": 231.0, "text": " which likely means a few seconds per frame, while the bolt-screwing scene is likely in the minutes per frame domain."}, {"start": 231.0, "end": 233.0, "text": " Very impressive!"}, {"start": 233.0, "end": 237.0, "text": " And, if you're a seasoned fellow scholar, you know what's coming."}, {"start": 237.0, "end": 243.0, "text": " This is where we invoke the first law of papers, which says that research is a process."}, {"start": 243.0, "end": 249.0, "text": " Do not look at where we are, look at where we will be two more papers down the line."}, {"start": 249.0, "end": 257.0, "text": " And, two more papers down the line, I am sure even the more complex simulations will be done in a matter of seconds."}, {"start": 257.0, "end": 260.0, "text": " What a time to be alive!"}, {"start": 260.0, "end": 263.0, "text": " This episode has been supported by weights and biases."}, {"start": 263.0, "end": 271.0, "text": " In this post, they show you how to use their tool to visualize confusion matrices and find out where your neural network made mistakes,"}, {"start": 271.0, "end": 274.0, "text": " and what exactly those mistakes were."}, {"start": 274.0, "end": 278.0, "text": " Weight and biases provides tools to track your experiments in your deep learning projects."}, {"start": 278.0, "end": 289.0, "text": " Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs, such as OpenAI, Toyota Research, GitHub, and more."}, {"start": 289.0, "end": 296.0, "text": " And the best part is that weights and biases is free for all individuals, academics, and open-source projects."}, {"start": 296.0, "end": 299.0, "text": " It really is as good as it gets."}, {"start": 299.0, "end": 308.0, "text": " If you are going to visit them through wnb.com slash papers, or just click the link in the video description, and you can get the free demo today."}, {"start": 308.0, "end": 313.0, "text": " Our thanks to weights and biases for their long-standing support and for helping us make better videos for you."}, {"start": 313.0, "end": 339.0, "text": " Thanks for watching and for your generous support, and I'll see you next 
time."}]
Two Minute Papers
https://www.youtube.com/watch?v=rawsSOLNYE0
This AI Makes Digital Copies of Humans! 👤
❤️ Check out the Gradient Dissent podcast by Weights & Biases: http://wandb.me/gd  📝 The paper "The Relightables: Volumetric Performance Capture of Humans with Realistic Relighting" is available here: https://augmentedperception.github.io/therelightables/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #vr
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to look at a paper with two twists. You know what? I'll give you twist number one right away. This human isn't here. This human isn't here either. And neither is this human here. Now you're probably asking, Károly, what are you even talking about? Now hold on to your papers because I am talking about this. Look, this is the geometry of this virtual human inside a virtual world. Whoa! Yes, all of these people are in a synthetic video and in a virtual environment that can be changed with a simple click. And more importantly, as we change the lighting or the environment, it also simulates the effect of that environment on the character, making it look like they are really there. So, that sounds good, but how do we take a human and make a digital copy of them? Well, first we place them in a capture system that contains hundreds of LED lights and an elaborate sensor for capturing depth information. Why do we need these? Well, all this gives the system plenty of data on how the skin, hair and the clothes reflect light. And, at this point, we know everything we need to know and can now proceed and place our virtual copy in a computer game or even a telepresence meeting. Now, this is already amazing, but two things really stick out here. One, you will see when you look at this previous competing work. This had a really smooth output geometry, which means that only a few high-frequency details were retained. This other work was better at retaining the details, but look, tons of artifacts appear when the model is moving. And, what does the new one look like? Is it any better? Let's have a look. Oh my, we get tons of fine details and the movements have improved significantly. Not perfect by any means, look here, but still, this is an amazing leap forward. Two, the other remarkable thing here is that the results are so realistic that objects in the virtual scene can cast a shadow on our model. What a time to be alive. And now, yes, you remember that I promised two twists. So, where is twist number two? Well, it has been here all along for the entirety of this video. Have you noticed? Look, all this is from 2019, from two years ago. Two years is a long time in machine learning and computer graphics research, and I cannot wait to see how it will be improved two more papers down the line. If you are excited too, make sure to subscribe and hit the bell icon to not miss it when it appears. This video has been supported by Weights & Biases. They have an amazing podcast by the name of Gradient Dissent, where they interview machine learning experts who discuss how they use learning based algorithms to solve real world problems. They have discussed biology, teaching robots, machine learning in outer space, and a whole lot more. Perfect for a Fellow Scholar with an open mind. Make sure to visit them through wandb.me slash gd or just click the link in the video description. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
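The relighting described above rests on a simple linear idea: if we record how the subject looks under each individual light of the capture rig, then their appearance under a new environment can be approximated as a weighted sum of those per-light images. The NumPy sketch below only illustrates that weighted-sum step; the array sizes, the random "captures", and the weights are made-up stand-ins, not the paper's actual pipeline.

import numpy as np

# Assumed toy setup: N one-light-at-a-time (OLAT) captures of the subject,
# each a small H x W x 3 image. Real systems use hundreds of lights and
# much more sophisticated geometry and reflectance processing.
N, H, W = 8, 4, 4
rng = np.random.default_rng(0)
olat_images = rng.random((N, H, W, 3))

# Weights describing how strongly each rig light should fire to mimic a new
# environment (e.g., sampled from an HDR environment map). Illustrative only.
light_weights = rng.random(N)
light_weights /= light_weights.sum()

# Image-based relighting: the relit image is a weighted sum of the OLAT captures.
relit = np.tensordot(light_weights, olat_images, axes=1)  # shape (H, W, 3)
print(relit.shape)

In a real system, the weights would come from sampling the target lighting environment for each rig light's direction, and the captures would be the hundreds of one-light-at-a-time photographs taken inside the rig.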
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zona Ifehir."}, {"start": 5.0, "end": 10.0, "text": " Today we are going to look at a paper with two twists."}, {"start": 10.0, "end": 14.0, "text": " You know what? I'll give you a twist number one right away."}, {"start": 14.0, "end": 22.0, "text": " This human isn't here. This human isn't here either."}, {"start": 22.0, "end": 25.0, "text": " And neither is this human here."}, {"start": 25.0, "end": 31.0, "text": " Now you're probably asking Karo what are you even talking about?"}, {"start": 31.0, "end": 37.0, "text": " Now hold on to your papers because I am talking about this."}, {"start": 37.0, "end": 44.0, "text": " Look, this is the geometry of this virtual human inside a virtual world."}, {"start": 44.0, "end": 46.0, "text": " Whoa!"}, {"start": 46.0, "end": 55.0, "text": " Yes, all of these people are in a synthetic video and in a virtual environment that can be changed with a simple click."}, {"start": 55.0, "end": 68.0, "text": " And more importantly, as we change the lighting or the environment, it also simulates the effect of that environment on the character making it look like they are really there."}, {"start": 68.0, "end": 75.0, "text": " So, that sounds good, but how do we take a human and make a digital copy of them?"}, {"start": 75.0, "end": 86.0, "text": " Well, first we place them in a capture system that contains hundreds of LED lights and an elaborate sensor for capturing depth information."}, {"start": 86.0, "end": 88.0, "text": " Why do we need these?"}, {"start": 88.0, "end": 97.0, "text": " Well, all this gives the system plenty of data on how the skin, hair and the clothes reflect light."}, {"start": 97.0, "end": 109.0, "text": " And, at this point, we know everything we need to know and can now proceed and place our virtual copy in a computer game or even a telepresence meeting."}, {"start": 109.0, "end": 116.0, "text": " Now, this is already amazing, but two things really stick out here."}, {"start": 116.0, "end": 120.0, "text": " One, you will see when you look at this previous competing work."}, {"start": 120.0, "end": 127.0, "text": " This had a really smooth output geometry, which means that only few high frequency details were retained."}, {"start": 127.0, "end": 137.0, "text": " This other work was better at retaining the details, but look, tons of artifacts appear when the model is moving."}, {"start": 137.0, "end": 144.0, "text": " And, what does the new one look like? Is it any better? Let's have a look."}, {"start": 144.0, "end": 151.0, "text": " Oh my, we get tons of fine details and the movements have improved significantly."}, {"start": 151.0, "end": 158.0, "text": " Not perfect by any means, look here, but still, this is an amazing leap forward."}, {"start": 158.0, "end": 168.0, "text": " Two, the other remarkable thing here is that the results are so realistic that objects in the virtual scene can cast a shadow on our model."}, {"start": 168.0, "end": 175.0, "text": " What a time to be alive. And now, yes, you remember that I promised two twists."}, {"start": 175.0, "end": 185.0, "text": " So, where is twist number two? Well, it has been here all along for the entirety of this video. 
Have you noticed?"}, {"start": 185.0, "end": 190.0, "text": " Look, all this is from 2019, from two years ago."}, {"start": 190.0, "end": 200.0, "text": " Two years is a long time in machine learning and computer graphics research, and I cannot wait to see how it will be improved two more papers down the line."}, {"start": 200.0, "end": 207.0, "text": " If you are excited too, make sure to subscribe and hit the bell icon to not miss it when it appears."}, {"start": 207.0, "end": 219.0, "text": " This video has been supported by weights and biases. They have an amazing podcast by the name, Gradient Descent, where they interview machine learning experts who discuss how they use"}, {"start": 219.0, "end": 230.0, "text": " learning based algorithms to solve real world problems. They have discussed biology, teaching robots, machine learning in outer space, and a whole lot more."}, {"start": 230.0, "end": 240.0, "text": " Perfect for a fellow scholar with an open mind. Make sure to visit them through wmb.me slash gd or just click the link in the video description."}, {"start": 240.0, "end": 251.0, "text": " Our thanks to weights and biases for their long standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=HnkVoOdTiSo
Can An AI Design A Good Game Level? 🤖
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/ayush-thakur/interpretability/reports/Interpretability-in-Deep-Learning-With-W-B-CAM-and-GradCAM--Vmlldzo5MTIyNw 📝 The paper "Adversarial Reinforcement Learning for Procedural Content Generation" is available here: https://www.ea.com/seed/news/cog2021-adversarial-rl-content-generation 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #gamedev
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Testing modern computer games by using an AI is getting more and more popular these days. This earlier work showcased how we can use an automated agent to test the integrity of the game by finding spots where we can get stuck. And when we fixed the problem, we could easily ask the agent to check whether the fix really worked. In this case, it did. And this new work also uses learning algorithms to test our levels. Now, this chap has been trained on a fixed level and mastered it, so let's see if it has managed to obtain general knowledge from it. How? Well, by testing how it performs on a different level. It is very confident, good, but... uh-oh, as you see, it is confidently incorrect. So, is it possible to train an agent to be able to beat these levels more reliably? Well, how about creating a more elaborate curriculum for them to learn on? Yes, let's do that. But with a twist. In this work, the authors chose not to feed the AI a fixed set of levels. No, no, they created another AI that builds the levels for the player AI. So, both the builder and the player are learning algorithms that are tasked with succeeding together in getting the agent to the finish line. They have to collaborate to succeed. Building the level means choosing the appropriate distance, height, angle, and size for these blocks. Let's see them playing together on an easy level. Okay, so far so good. But let's not let them build a little cartel where only easy levels are being generated so that they get a higher score. I want to see a challenge. To do that, let's force the builder AI to use a larger average distance between the blocks, thereby creating levels of a prescribed difficulty. And with that, let's ramp up the difficulty a little. Things get a little more interesting here because... whoa! Do you see what I see here? Look! It even found a shortcut to the end of the level. And let's see the harder levels together. While many of these chaps failed, some of them are still able to succeed. Very cool. Let's compare the performance of the new technique with the previous fixed-track agent. This is the chap that learned by mastering only a fixed track. And this one learned in the wilderness. Neither of them has seen these levels before. So, who is going to do better? Of course, the wilderness guy described in the new technique. Excellent! So, all this sounds great, but I hear you asking the key question here. What do we use this for? Well, one, the player AI can test the levels that we are building for our game and give us feedback on whether it is possible to finish, whether it is too hard or too easy, and more. This can be a godsend when updating some levels because the agent will almost immediately tell us whether it has gotten easier or harder or if we have broken the level. No human testing is required. Now, hold on to your papers because the thing runs so quickly that we can even refine a level in real time. Loving it. Or, two, the builder can also be given to a human player who might enjoy a level being built in real time in front of them. And here comes the best part. The whole concept generalizes well for other kinds of games, too. Look, the builder can build racetracks and the player can try to drive through them. So, do these great results also carry over to the racing game? Let's see what the numbers say.
The agent that trained on a fixed track can succeed on an easy level about 75% of the time, while the newly proposed agent can do it with nearly a 100% chance. A bit of an improvement. OK, now look at this. The fixed-track agent can only beat a hard level about 2 times out of 10, while the new agent can do it about 6 times out of 10. That is quite a bit of an improvement. Now, note that in a research paper, choosing a proper baseline to compare to is always a crucial question. I would like to note that the baseline here is not the state of the art, and with that, it is a little easier to make the new solution pop. No matter, the solutions are still good, but I think this is worth noting. So, from now on, whenever we create a new level in a computer game, we can have hundreds of competent AI players testing it in real time. So good. What a time to be alive! This episode has been supported by Weights & Biases. In this post, they show you how to use their tool to interpret the results of your neural networks. For instance, they show you how to check whether your neural network has even looked at the dog in an image before classifying it as a dog. If you work with learning algorithms on a regular basis, make sure to check out Weights & Biases. Their system is designed to help you organize your experiments, and it is so good it could shave off weeks or even months of work from your projects, and is completely free for all individuals, academics, and open-source projects. This really is as good as it gets, and it is hardly a surprise that they are now used by over 200 companies and research institutions. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their longstanding support, and for helping us make better videos for you. Thanks for watching, and for your generous support, and I'll see you next time.
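The builder-and-player idea above can be summarized as a simple training loop: one learning agent proposes level parameters at a requested difficulty, another tries to finish the level, and both are rewarded when the player reaches the goal. The Python sketch below is a heavily simplified, self-contained illustration of that loop with made-up level parameters and a random "policy"; it is not the paper's actual algorithm or reward design.

import random

def build_level(difficulty, rng):
    """Builder: pick block gaps around a prescribed average difficulty."""
    return [rng.uniform(0.5, 1.0) * difficulty for _ in range(10)]

def play_level(gaps, rng):
    """Player: succeed if every gap is within its (random) jumping ability."""
    jump_reach = rng.uniform(1.0, 3.0)
    return all(gap <= jump_reach for gap in gaps)

rng = random.Random(42)
for difficulty in (1.0, 2.0, 3.0):
    wins = sum(play_level(build_level(difficulty, rng), rng) for _ in range(1000))
    # In the real method, these outcomes would update both agents' policies;
    # here we only report how often the untrained player reaches the goal.
    print(f"difficulty {difficulty}: success rate {wins / 1000:.0%}")

The key design choice the video describes survives even in this toy form: the builder is constrained to a prescribed difficulty, so it cannot collude with the player by only generating trivially easy levels.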
[{"start": 0.0, "end": 11.200000000000001, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir, testing modern computer games by using an AI is getting more and more popular these days."}, {"start": 11.200000000000001, "end": 21.400000000000002, "text": " This earlier work showcased how we can use an automated agent to test the integrity of the game by finding spots where we can get stuck."}, {"start": 21.400000000000002, "end": 28.400000000000002, "text": " And when we fixed the problem, we could easily ask the agent to check whether the fix really worked."}, {"start": 28.4, "end": 31.4, "text": " In this case, it did."}, {"start": 33.4, "end": 38.4, "text": " And this new work also uses learning algorithms to test our levels."}, {"start": 38.4, "end": 47.4, "text": " Now this chap has been trained on a fixed level, mastered it, and let's see if it has managed to obtain general knowledge from it."}, {"start": 47.4, "end": 53.4, "text": " How? Well, by testing how it performs on a different level."}, {"start": 53.4, "end": 62.4, "text": " It is very confident, good, but... oh oh, as you see, it is confidently incorrect."}, {"start": 62.4, "end": 69.4, "text": " So, is it possible to train an agent to be able to beat these levels more reliably?"}, {"start": 69.4, "end": 74.4, "text": " Well, how about creating a more elaborate curriculum for them to learn on?"}, {"start": 74.4, "end": 84.4, "text": " Yes, let's do that. But with a twist. In this work, the authors chose not to feed the AI a fixed set of levels."}, {"start": 84.4, "end": 91.4, "text": " No, no, they created another AI that builds the levels for the player AI."}, {"start": 91.4, "end": 101.4, "text": " So, both the builder and the player are learning algorithms who are tasked to succeed together in getting the agent to the finish line."}, {"start": 101.4, "end": 111.4, "text": " They have to collaborate to succeed. Building the level means choosing the appropriate distance, height, angle, and size for these blocks."}, {"start": 111.4, "end": 115.4, "text": " Let's see them playing together on an easy level."}, {"start": 115.4, "end": 124.4, "text": " Okay, so far so good. But let's not let them build a little cartel where only easy levels are being generated."}, {"start": 124.4, "end": 139.4, "text": " So, they get a higher score. I want to see a challenge. To do that, let's force the builder AI to use a larger average distance between the blocks and thereby creating levels of a prescribed difficulty."}, {"start": 139.4, "end": 143.4, "text": " And with that, let's ramp up the difficulty a little."}, {"start": 143.4, "end": 148.4, "text": " Things get a little more interesting here because... whoa!"}, {"start": 148.4, "end": 158.4, "text": " Do you see what I see here? Look! It even found a shortcut to the end of the level."}, {"start": 158.4, "end": 163.4, "text": " And let's see the harder levels together."}, {"start": 163.4, "end": 171.4, "text": " While many of these chaps failed, some of them are still able to succeed. A very cool."}, {"start": 171.4, "end": 181.4, "text": " Let's compare the performance of the new technique with the previous FixTrack agent. This is the chap that learned by mastering only a FixTrack."}, {"start": 181.4, "end": 187.4, "text": " And this one learned in the wilderness. 
Neither of them have seen these levels before."}, {"start": 187.4, "end": 191.4, "text": " So, who is going to be scraperer?"}, {"start": 191.4, "end": 196.4, "text": " Of course, the wilderness guy described in the new technique. Excellent!"}, {"start": 196.4, "end": 203.4, "text": " So, all this sounds great, but I hear you asking the key question here. What do we use this for?"}, {"start": 203.4, "end": 213.4, "text": " Well, one, the player AI can test the levels that we are building for our game and give us feedback on whether it is possible to finish,"}, {"start": 213.4, "end": 217.4, "text": " is it too hard or too easy, and more."}, {"start": 217.4, "end": 229.4, "text": " This can be a godsend when updating some levels because the agent will almost immediately tell us whether it has gotten easier or harder or if we have broken the level."}, {"start": 229.4, "end": 244.4, "text": " No human testing is required. Now, hold on to your papers because the thing runs so quickly that we can even refine a level in a real time."}, {"start": 244.4, "end": 255.4, "text": " Loving it. Or, too, the builder can also be given to a human player who might enjoy a level being built in real time in front of them."}, {"start": 255.4, "end": 262.4, "text": " And here comes the best part. The whole concept generalizes well for other kinds of games, too."}, {"start": 262.4, "end": 270.4, "text": " Look, the builder can build racetracks and the player can try to drive through them."}, {"start": 270.4, "end": 277.4, "text": " So, do these great results also generalize to the racing game. Let's see what the numbers say."}, {"start": 277.4, "end": 290.4, "text": " The agent that trained on a fixed track can succeed on an easy level, about 75% of the time, while the newly proposed agent can do it nearly with a 100% chance."}, {"start": 290.4, "end": 305.4, "text": " A bit of an improvement. OK, now look at this. The fixed track agent can only beat a hard level, about 2 times out of 10, while the new agent can do it about 6 times out of 10."}, {"start": 305.4, "end": 308.4, "text": " That is quite a bit of an improvement."}, {"start": 308.4, "end": 315.4, "text": " Now, note that in a research paper, choosing a proper baseline to compare to is always a crucial question."}, {"start": 315.4, "end": 324.4, "text": " I would like to note that the baseline here is not the state of the art, and with that it is a little easier to make the new solution pop."}, {"start": 324.4, "end": 329.4, "text": " No matter the solutions are still good, but I think this is worth a note."}, {"start": 329.4, "end": 340.4, "text": " So, from now on, whenever we create a new level in a computer game, we can have hundreds of competent AI players testing it in real time."}, {"start": 340.4, "end": 347.4, "text": " So good. What a time to be alive! This episode has been supported by Wades and Biasis."}, {"start": 347.4, "end": 353.4, "text": " In this post, they show you how to use their tool to interpret the results of your neural networks."}, {"start": 353.4, "end": 361.4, "text": " For instance, they tell you how to look if your neural network has even looked at the dog in an image before classifying it to be a dog."}, {"start": 361.4, "end": 381.4, "text": " If you work with learning algorithms on a regular basis, make sure to check out Wades and Biasis. 
Their system is designed to help you organize your experiments, and it is so good it could shave off weeks or even months of work from your projects, and is completely free for all individuals, academics, and open source projects."}, {"start": 381.4, "end": 390.4, "text": " This really is as good as it gets, and it is hardly a surprise that they are now used by over 200 companies and research institutions."}, {"start": 390.4, "end": 399.4, "text": " Make sure to visit them through wnbe.com slash papers, or just click the link in the video description, and you can get a free demo today."}, {"start": 399.4, "end": 405.4, "text": " Our thanks to Wades and Biasis for their longstanding support, and for helping us make better videos for you."}, {"start": 405.4, "end": 429.4, "text": " Thanks for watching, and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=81rBzfbFLiE
OpenAI Codex: Your Robot Assistant! 🤖
❤️ Check out Perceptilabs and sign up for a free demo here: https://www.perceptilabs.com/papers 📝 The paper "Evaluating Large Language Models Trained on Code" is available here: https://openai.com/blog/openai-codex/ Codex tweet/application links: Explaining code: https://twitter.com/CristiVlad25/status/1432017112885833734 Pong game: https://twitter.com/slava__bobrov/status/1425904829013102602 Blender Scripting: https://www.youtube.com/watch?v=MvHbrVfEuyk GPT-3 tweet/application links: Website layout: https://twitter.com/sharifshameem/status/1283322990625607681 Plots: https://twitter.com/aquariusacquah/status/1285415144017797126?s=12 Typesetting math: https://twitter.com/sh_reya/status/1284746918959239168 Population data: https://twitter.com/pavtalk/status/1285410751092416513 Legalese: https://twitter.com/f_j_j_/status/1283848393832333313 Nutrition labels: https://twitter.com/lawderpaul/status/1284972517749338112 User interface design: https://twitter.com/jsngr/status/1284511080715362304 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #openai #codex #copilot
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to see if an AI can become a good software engineer. Spoiler alert, the answer is yes, kind of. Let me explain. Just one year ago, scientists at OpenAI published a technique by the name of GPT-3, and it is an AI that was unleashed to read the internet with the sole task of finishing your sentences. So, what happened then? Well, now we know that of course it learned whatever it needed to learn to perform the sentence completion properly. And to do this, it would need to learn English by itself, and that's exactly what it did. It also learned about a lot of topics to be able to discuss them well. We gave it a try and I was somewhat surprised when I saw that it was able to continue a Two Minute Papers script, even though it seems to have turned into a history lesson. It also learned how to generate properly formatted plots from a tiny prompt written in plain English, not just one kind, many kinds. And remember, this happened just about a year ago, and this AI was pretty good at many things. But soon after, a newer work was published by the name of Image GPT. What did this do? Well, this was a GPT variant that could finish not your sentences, but your images. Yes, really. The problem statement is simple: we give it an incomplete image, and we ask the AI to fill in the missing pixels. Have a look at this water droplet example. We humans know that since we see the remnants of some ripples over there too, there must be a splash, but does the AI know? Oh yes, yes it does. Amazing. And this is the true image for reference. So, what did they come out with now? Well, the previous GPT was pretty good at many things, and this new work, OpenAI Codex, is a GPT language model that was fine-tuned to be excellent at one thing. And that is writing computer programs or finishing your code. Sounds good. Let's give it a try. First, please write a program that says Hello World 5 times. It can do that. And we can also ask it to create a graphical user interface for it. No coding skills required. That's not bad by any means, but this is OpenAI we are talking about, so I am sure it can do even better. Let's try something a tiny bit more challenging. For instance, writing a simple space game. First, we get an image of a spaceship that we like, then instruct the algorithm to resize and crop it. And here comes one of my favorites: start animating it. Look, it immediately wrote the appropriate code where it will travel with a prescribed speed. And yes, it should get flipped as soon as it hits the wall. Looks good. But will it work? Let's see. It does. And all this from a written English description. Outstanding. Of course, this is still not quite the physics simulation that you all know and love around here, but I'll take it. But this is still not a game, so please add a moving asteroid, check for collisions and infuse the game with a scoring system. And there we go. So how long did all this take? And now hold on to your papers because this game was written in approximately 9 minutes. No coding knowledge was required. Wow. What a time to be alive. Now, in these 9-ish minutes, most of the time was not spent by the AI thinking, but by the human typing. So still, the human is the bottleneck. But today, with all the amazing voice recognition systems that we have, we don't even need to type these instructions. Just say what you want and it will be able to do it. So, what else can it do?
For instance, it can also deal with requests similar to what software engineers are asked in interviews. And I have to say the results indicate that this AI would get hired at some places. But that's not all. It can also nail a first-grade math test. An AI. Food for thought. Now, this OpenAI Codex work has been out there for a few days now, and I decided not to cover it immediately, but wait a little and see where the users take it. This is, of course, not great for views, but no matter, we are not maximizing views, we are maximizing meaning. In return, there are some examples out there in the wild. Let's look at three of them. One, it can be asked to explain a piece of code, even if it is written in assembly. Two, it can create a Pong game in 30 seconds. Remember, this used to be a blockbuster Atari game, and now an AI can write it in half a minute. And yes, again, most of the half minute is taken by waiting for the human for instructions. Wow! It can also create a plugin for Blender, an amazing, free 3D modeling program. These things used to take several hours of work at the very least. And with that, I feel that what I said about GPT-3 rings even more true today. I am replacing GPT-3 with Codex and quoting: The main point is that working with Codex is a really peculiar process, where we know that a vast body of knowledge lies within, but it only emerges if we can bring it out with properly written prompts. It almost feels like a new kind of programming that is open to everyone, even people without any programming or technical knowledge. If a computer is a bicycle for the mind, then Codex is a fighter jet. And all this progress in just one year. I cannot wait to see where you Fellow Scholars will take it, and what OpenAI has in mind for just one more paper down the line. And until then, software coding might soon be a thing anyone can do. What a time to be alive! PerceptiLabs is a visual API for TensorFlow, carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables, and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers to easily install the free local version of their system today. Our thanks to PerceptiLabs for their support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
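To make the "plain English in, working program out" idea tangible, here is the kind of Python a Codex-style model might plausibly return for the two requests mentioned above (print Hello World five times, then wrap it in a small graphical user interface). This is an illustrative guess at such an output, written by hand for this document, not an actual transcript of Codex's response.

# Request 1: "Write a program that says Hello World 5 times."
for _ in range(5):
    print("Hello World")

# Request 2: "Now create a graphical user interface for it."
import tkinter as tk

def say_hello():
    # Fill the label with five lines of "Hello World" when the button is pressed.
    label["text"] = "\n".join(["Hello World"] * 5)

root = tk.Tk()
root.title("Hello World")
label = tk.Label(root, text="Press the button")
label.pack(padx=20, pady=10)
tk.Button(root, text="Say Hello World 5 times", command=say_hello).pack(pady=10)
root.mainloop()

The point is that neither snippet requires the user to know the language or the GUI toolkit; the prompt describes the behavior, and the model supplies the boilerplate.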
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.64, "end": 10.64, "text": " Today we are going to see if an AI can become a good software engineer."}, {"start": 10.64, "end": 15.84, "text": " Spoiler alert, the answer is yes, kind of."}, {"start": 15.84, "end": 17.2, "text": " Let me explain."}, {"start": 17.2, "end": 24.080000000000002, "text": " Just one year ago, scientists at OpenAI published a technique by the name GPT-3"}, {"start": 24.08, "end": 32.48, "text": " and it is an AI that was unleashed to read the internet with the sole task of finishing your sentences."}, {"start": 32.48, "end": 34.879999999999995, "text": " So, what happened then?"}, {"start": 34.879999999999995, "end": 42.959999999999994, "text": " Well, now we know that of course it learned whatever it needed to learn to perform the sentence completion properly."}, {"start": 42.959999999999994, "end": 49.519999999999996, "text": " And to do this, it would need to learn English by itself, and that's exactly what it did."}, {"start": 49.52, "end": 54.24, "text": " It also learned about a lot of topics to be able to discuss them well."}, {"start": 54.24, "end": 62.24, "text": " We gave it a try and I was somewhat surprised when I saw that it was able to continue a two minute paper script,"}, {"start": 62.24, "end": 66.56, "text": " even though it seems to have turned into a history lesson."}, {"start": 66.56, "end": 74.24000000000001, "text": " It also learned how to generate properly formatted plots from a tiny prompt written in plain English,"}, {"start": 74.24000000000001, "end": 77.36, "text": " not just one kind, many kinds."}, {"start": 77.36, "end": 84.16, "text": " And remember, this happened just about a year ago, and this AI was pretty good at many things."}, {"start": 84.96, "end": 90.16, "text": " But soon after a newer work was published by the name Image GPT."}, {"start": 90.72, "end": 91.6, "text": " What did this do?"}, {"start": 92.4, "end": 98.4, "text": " Well, this was a GPT variant that could not finish your sentences, but your images."}, {"start": 99.2, "end": 100.72, "text": " Yes, really."}, {"start": 100.72, "end": 108.64, "text": " The problem statement is simple, we give it an incomplete image, and we ask the AI to fill in the missing pixels."}, {"start": 109.6, "end": 111.84, "text": " Have a look at this water droplet example."}, {"start": 112.64, "end": 118.08, "text": " We humans know that since we see the remnants of some ripples over there too,"}, {"start": 118.08, "end": 121.75999999999999, "text": " there must be a splash, but does the AI know?"}, {"start": 123.03999999999999, "end": 126.0, "text": " Oh yes, yes it does. Amazing."}, {"start": 126.8, "end": 129.44, "text": " And this is the true image for reference."}, {"start": 129.44, "end": 132.48, "text": " So, what did they come out with now?"}, {"start": 132.48, "end": 140.48, "text": " Well, the previous GPT was pretty good at many things, and this new work OpenAI Codex"}, {"start": 140.48, "end": 145.6, "text": " is a GPT language model that was fine tuned to be excellent at one thing."}, {"start": 145.6, "end": 150.96, "text": " And that is writing computer programs or finishing your code."}, {"start": 150.96, "end": 153.92, "text": " Sounds good. 
Let's give it a try."}, {"start": 153.92, "end": 159.6, "text": " First, please write a program that says Hello World 5 times."}, {"start": 161.6, "end": 162.56, "text": " It can do that."}, {"start": 162.56, "end": 167.83999999999997, "text": " And we can also ask it to create a graphical user interface for it."}, {"start": 167.83999999999997, "end": 170.88, "text": " No coding skills required."}, {"start": 170.88, "end": 176.56, "text": " That's not bad by any means, but this is OpenAI we are talking about,"}, {"start": 176.56, "end": 179.35999999999999, "text": " so I am sure it can do even better."}, {"start": 179.35999999999999, "end": 182.56, "text": " Let's try something a tiny bit more challenging."}, {"start": 182.56, "end": 185.92000000000002, "text": " For instance, writing a simple space game."}, {"start": 185.92000000000002, "end": 189.92000000000002, "text": " First, we get an image of a spaceship that we like,"}, {"start": 189.92000000000002, "end": 193.6, "text": " then instruct the algorithm to resize and crop it."}, {"start": 193.6, "end": 198.96, "text": " And here comes one of my favorites, start animating it."}, {"start": 198.96, "end": 205.2, "text": " Look, it immediately wrote the appropriate code where it will travel with a prescribed speed."}, {"start": 205.2, "end": 209.68, "text": " And yes, it should get flipped as soon as it hits the wall."}, {"start": 209.68, "end": 211.2, "text": " Looks good."}, {"start": 211.2, "end": 214.56, "text": " But will it work? Let's see."}, {"start": 216.16, "end": 216.72, "text": " It does."}, {"start": 217.35999999999999, "end": 220.23999999999998, "text": " And all this from a written English description."}, {"start": 220.88, "end": 221.76, "text": " Outstanding."}, {"start": 222.39999999999998, "end": 227.92, "text": " Of course, this is still not quite the physics simulation that you all see and love around here,"}, {"start": 228.48, "end": 229.6, "text": " but I'll take it."}, {"start": 230.95999999999998, "end": 235.76, "text": " But this is still not a game, so please add a moving asteroid,"}, {"start": 235.76, "end": 240.0, "text": " check for collisions and infuse the game with a scoring system."}, {"start": 240.0, "end": 241.44, "text": " And there we go."}, {"start": 242.08, "end": 244.48, "text": " So how long did all this take?"}, {"start": 245.28, "end": 251.6, "text": " And now hold on to your papers because this game was written in approximately 9 minutes."}, {"start": 252.48, "end": 254.4, "text": " No coding knowledge is required."}, {"start": 255.28, "end": 255.68, "text": " Wow."}, {"start": 256.56, "end": 257.92, "text": " What a time to be alive."}, {"start": 258.8, "end": 264.4, "text": " Now, in these 9-ish minutes, most of the time was not spent by the AI thinking,"}, {"start": 264.4, "end": 266.16, "text": " but the human typing."}, {"start": 266.16, "end": 269.28000000000003, "text": " So still, the human is the bottleneck."}, {"start": 269.28000000000003, "end": 274.16, "text": " But today, with all the amazing voice recognition systems that we have,"}, {"start": 274.16, "end": 277.44, "text": " we don't even need to type these instructions."}, {"start": 277.44, "end": 281.20000000000005, "text": " Just say what you want and it will be able to do it."}, {"start": 286.0, "end": 287.92, "text": " So, what else can it do?"}, {"start": 287.92, "end": 294.88, "text": " For instance, it can also deal with similar requests to what software engineers are asked in interviews."}, {"start": 294.88, "end": 
301.12, "text": " And I have to say the results indicate that this AI would get hired to some places."}, {"start": 302.64, "end": 303.92, "text": " But that's not all."}, {"start": 303.92, "end": 307.04, "text": " It can also nail the first grade math test."}, {"start": 308.15999999999997, "end": 308.88, "text": " An AI."}, {"start": 309.52, "end": 310.24, "text": " Food for thought."}, {"start": 311.12, "end": 315.52, "text": " Now, this open AI codex work has been out there for a few days now,"}, {"start": 315.52, "end": 322.96, "text": " and I decided not to cover it immediately, but wait a little and see where the users take it."}, {"start": 322.96, "end": 328.47999999999996, "text": " This is, of course, not great for views, but no matter we are not maximizing views,"}, {"start": 328.47999999999996, "end": 330.15999999999997, "text": " we are maximizing meaning."}, {"start": 330.79999999999995, "end": 333.68, "text": " In return, there are some examples out there in the world."}, {"start": 334.32, "end": 335.76, "text": " Let's look at three of them."}, {"start": 336.47999999999996, "end": 342.4, "text": " One, it can be asked to explain a piece of code, even if it is written in assembly."}, {"start": 343.67999999999995, "end": 347.91999999999996, "text": " Two, it can create a pong game in 30 seconds."}, {"start": 347.92, "end": 356.08000000000004, "text": " Remember, this used to be a blockbuster Atari game, and now an AI can write it in half a minute."}, {"start": 356.88, "end": 363.20000000000005, "text": " And yes, again, most of the half minute is taken by waiting for the human for instructions."}, {"start": 364.40000000000003, "end": 364.72, "text": " Wow!"}, {"start": 365.6, "end": 371.04, "text": " It can also create a plugin for Blender, an amazing, free 3D modeler program."}, {"start": 371.68, "end": 375.84000000000003, "text": " These things used to take several hours of work at the very least."}, {"start": 375.84, "end": 382.64, "text": " And with that, I feel that what I said for GPT3 rings even more true today."}, {"start": 382.64, "end": 387.28, "text": " I am replacing GPT3 with codex and quoting."}, {"start": 387.28, "end": 392.32, "text": " The main point is that working with codex is a really peculiar process,"}, {"start": 392.32, "end": 395.91999999999996, "text": " where we know that a vast body of knowledge lies within,"}, {"start": 395.91999999999996, "end": 400.47999999999996, "text": " but it only emerges if we can bring it out with properly written prompts."}, {"start": 400.48, "end": 407.6, "text": " It almost feels like a new kind of programming that is open to everyone, even people without any"}, {"start": 407.6, "end": 414.40000000000003, "text": " programming or technical knowledge. If a computer is a bicycle for the mind, then codex is a fighter jet."}, {"start": 414.40000000000003, "end": 418.24, "text": " And all this progress in just one year."}, {"start": 418.24, "end": 422.24, "text": " I cannot wait to see where you follow scholars will take it,"}, {"start": 422.24, "end": 426.64000000000004, "text": " and what open AI has in mind for just one more paper down the line."}, {"start": 426.64, "end": 432.15999999999997, "text": " And until then, software coding might soon be a thing anyone can do."}, {"start": 432.15999999999997, "end": 438.32, "text": " What a time to be alive! 
PerceptiLabs is a visual API for TensorFlow,"}, {"start": 438.32, "end": 442.8, "text": " carefully designed to make machine learning as intuitive as possible."}, {"start": 442.8, "end": 448.15999999999997, "text": " This gives you a faster way to build out models with more transparency into how your model is"}, {"start": 448.15999999999997, "end": 454.4, "text": " architected, how it performs, and how to debug it. Look, it lets you toggle between the visual"}, {"start": 454.4, "end": 460.4, "text": " modeler and the code editor. It even generates visualizations for all the model variables,"}, {"start": 460.4, "end": 466.56, "text": " and gives you recommendations both during modeling and training, and does all this automatically."}, {"start": 466.56, "end": 472.47999999999996, "text": " I only wish I had a tool like this when I was working on my neural networks during my PhD years."}, {"start": 472.47999999999996, "end": 479.28, "text": " Visit perceptilabs.com slash papers to easily install the free local version of their system today."}, {"start": 479.28, "end": 484.55999999999995, "text": " Our thanks to perceptilabs for their support, and for helping us make better videos for you."}, {"start": 484.56, "end": 514.4, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=6hkiTejoyms
Watch Tesla’s Self-Driving Car Learn In a Simulation! 🚘
❤️ Check out Fully Connected by Weights & Biases: https://wandb.me/papers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #Tesla #TeslaAIDay
Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir. Today, we are going to see how Tesla uses no less than a simulated game world to train their self-driving cars. And more. In their AI Day presentation video, they really put up a clinic of recent AI research results and how they apply them to develop self-driving cars. And, of course, there is plenty of coverage of the event, but as always, we are going to look at it from a different angle. We are doing it paper style. Why? Because after nearly every two-minute paper's episode, where we showcase an amazing paper, I get a question saying something like, okay, but when do I get to see or use this in the real world? And rightfully so, that is a good question. And in this presentation, you will see that these papers that you see here get transferred into real-world products so fast it really makes my head spin. Let's see this effect demonstrated by looking through their system. Now, first, their cars have many cameras, no depth information, just the pixels from these cameras, and one of their goals is to create this vector space view that you see here. That is almost like a map or a video game version of the real roads and objects around us. That is a very difficult problem. Why is that? Because the car has many cameras. Is that a problem? Yes, kind of. I'll explain in a moment. You see, there is a bottom layer that processes the raw sensor data from the cameras mounted on the vehicle. So, here, in-go the raw pixels and outcomes more useful, high-level information that can be used to determine whether this clump of pixels is a car or a traffic light. Then, in the upper layers, this data can be used for more specific tasks, for instance, trying to estimate where the lanes and curbs are. So, what papers are used to accomplish this? Looking through the architecture diagrams, we see transformer neural networks, BIFPNs, and Ragnet. All papers from the last few years, for instance, Ragnet is a neural network variant that is great at extracting spatial temporal information from the raw sensor data. And that is a paper from 2020, from just one year ago, already actively used in training self-driving cars. That is unreal. Now, we mentioned that having many cameras is a bit of a problem. Why is that? Isn't that supposed to be a good thing? Well, look, each of the cameras only sees parts of the truck. So, how do we know where exactly it is and how long it is? We need to know all this information to be able to accurately put the truck into the vector space view. What we need for this is a technique that can fuse information from many cameras together intelligently. Note that this is devilishly difficult due to each of the cameras having a different calibration, location, view directions, and other properties. So, who is to tell that the point here corresponds to which point in a different camera view? And this is accomplished through, yes, a transformer neural network, a paper from 2017. So, does this multi-camera technique work? Does this improve anything? Well, let's see. Oh, yes, the yellow predictions here are from the previous single camera network, and as you see, unfortunately, things flicker in and out of existence. Why is that? It is because a passing car is leaving the view of one of the cameras and as it enters the view of the next one, they don't have this correspondance technique that would say where it is exactly. 
And, look, the blue objects show the prediction of the multi-camera network that can do that, and things aren't perfect, but they are significantly better than the single camera network. That is great. However, we are still not taking into consideration time. Why is that important? Let's have a look at two examples. One, if we are only looking at still images and not taking into consideration how they change over time, how do we know if this car is stationary? Is it about to park somewhere or is it speeding? Also, two, this car is now occluded, but we saw it a second ago, so we should know what it is up to. That sounds great, and what else can we do if our self-driving system has a concept of time? Well, much like humans do, we can make predictions. These predictions can take place both in terms of mapping what is likely to come, an intersection around about, and so on. But, perhaps even more importantly, we can also make predictions about vehicle behavior. Let's see how that works. The green lines show how far away the next vehicle is, and how fast it is going. The green line tells us the real true information about it. Do you see the green? No, that's right, it is barely visible because it is occluded by a blue line, which is the prediction of the new video network. That means that its predictions are barely off from the real velocities and distances, which is absolutely amazing. And as you see with orange, the old network that was based on single images is off by quite a bit. So now, a single car can make a rough map of its environment wherever it drives, and they can also stage the readings of multiple cars together into an even more accurate map. Putting this all together, these cars have a proper understanding of their environment, and this makes navigation much easier. Look at those crisp, temporally stable labelling. It has very little flickering. Still not perfect by any means, but this is remarkable progress in so little time. And we are at the point where predicting the behaviors of other vehicles and pedestrians can also lead to better decision making. But we are not done yet. Not even close. Look, the sad truth of driving is that unexpected things happen. For instance, this truck makes it very difficult for us to see, and the self-driving system does not have a lot of data to deal with that. So, what is a possible solution to that? There are two solutions. One is fetching more training data. One car can submit an unexpected event and request that the entire Tesla fleet sends over if they have encountered something similar. Since there are so many of these cars on the streets, tens of thousands of similar examples can be fetched from them and added to the training data to improve the entire fleet. That is mind-blowing. One car encounters a difficult situation, and then every car can learn from it. How cool is that? That sounds great. So, what is the second solution? Not fetching more training data, but creating more training data. What? Just make stuff up? Yes, that's exactly right. And if you think that is ridiculous, and are asking, how could that possibly work? Well, hold on to your papers because it does work. You are looking at it right now. Yes, this is a photorealistic simulation that teaches self-driving cars to handle difficult corner cases better. In the real world, we can learn from things that already happened, but in a simulation, we can make anything happen. 
This concept really works, and one of my favorite examples is OpenAI's robot hand that we have showcased earlier in this series. This also learned its rotation techniques in a simulation, and it does it so well that the software can be uploaded to a real robot hand, and it will work in real situations too. And now, the same concept for self-driving cars, loving it. With these simulations, we can even teach these cars about cases that would otherwise be impossible, or unsafe to test. For instance, in this system, the car can safely learn what it should do if it sees people and dogs running on the highway. A capable artist can also create miles and miles of these virtual locations within a day of work. This simulation technique is truly a treasure trove of data because it can also be procedurally generated, and the moment the self-driving system makes an incorrect decision, a Tesla employee can immediately create an endless set of similar situations to teach it. Now, I don't know if you remember, we talked about a fantastic paper, a couple months ago, that looked at real-world videos, then took video footage from a game, and improved it to look like the real world. Converting video games to reality, if you will. This had an interesting limitation. For instance, since the AI was trained on the beautiful lush hills of Germany and Austria, it hasn't really seen the dry hills of LA. So, what does it do with them? Look, it redrew the hills the only way it saw hills exist, which is covered with trees. So, what does this have to do with Tesla's self-driving cars? Well, if you have been holding onto your papers so far, now squeeze that paper because they went the other way around. Yes, that's right. They take video footage of a real unexpected event where the self-driving system failed, use the automatic labeler used for the vector space view, and what do they make out of it? A video game version. Holy mother of papers. And in this video game, it is suddenly much easier to teach the algorithm safely. You can also make it easier, harder, replace a car with a dog or a pack of dogs, and make many similar examples so that the AI can learn from these what-if situations as much as possible. So, there you go. Full tech transfer into a real AI system in just a year or two. So, yes, the papers you see here are for real, as real as it gets. And, yes, the robot is not real, just a silly joke. For now. And two more things that make all this even more mind-blowing. One, remember, they don't showcase the latest and greatest that they have, just imagine that everything you heard today is old news compared to the tech they have now. And two, we have only looked at just one side of what is going on. For instance, we haven't even talked about their amazing Dojo chip. And if all this comes to fruition, we will be able to travel cheaper, more relaxed, and also, perhaps most importantly, safer. I can't wait. I really cannot wait. What a time to be alive. This video has been supported by Weights & Biases. Check out their recent offering, Fully Connected, a place where they bring machine learning practitioners together to share and discuss their ideas, learn from industry leaders, and even collaborate on projects together. You see, I get messages from you Fellow Scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. Fully Connected is a great way to learn about the fundamentals, how to reproduce experiments, get your papers accepted to a conference, and more.
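The procedural generation step mentioned above can be pictured as a scenario sampler: one recorded failure case becomes a template, and a generator perturbs it into endless variations. The sketch below is purely illustrative; the scenario fields, the parameter ranges, and the assumption that a real simulator would consume these parameters are all my own.

```python
# Minimal sketch of procedural scenario generation for a driving simulator.
# The fields and ranges are invented for illustration only.
import random
from dataclasses import dataclass

@dataclass
class Scenario:
    time_of_day: float          # hours, 0-24
    weather: str
    obstacle: str               # what suddenly appears on the road
    obstacle_distance_m: float
    obstacle_speed_mps: float
    occluding_truck: bool       # does a truck block the view, as in the failure case?

def make_variations(base: Scenario, n: int, seed: int = 0) -> list[Scenario]:
    """Given one recorded failure case, spawn n perturbed copies of it."""
    rng = random.Random(seed)
    variants = []
    for _ in range(n):
        variants.append(Scenario(
            time_of_day=rng.uniform(0.0, 24.0),
            weather=rng.choice(["clear", "rain", "fog", "snow"]),
            # swap the original obstacle for harder ones, e.g. a pack of dogs
            obstacle=rng.choice([base.obstacle, "dog", "pack_of_dogs", "pedestrian"]),
            obstacle_distance_m=base.obstacle_distance_m * rng.uniform(0.5, 2.0),
            obstacle_speed_mps=rng.uniform(0.0, 8.0),
            occluding_truck=rng.random() < 0.7,
        ))
    return variants

# Usage: one real failure case becomes an endless stream of training scenes.
failure_case = Scenario(14.0, "clear", "stopped_car", 40.0, 0.0, True)
for s in make_variations(failure_case, 3):
    print(s)
```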
Make sure to visit them through wandb.me slash papers or just click the link in the video description. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.76, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.76, "end": 13.44, "text": " Today, we are going to see how Tesla uses no less than a simulated game world to train their self-driving cars."}, {"start": 13.44, "end": 14.72, "text": " And more."}, {"start": 14.72, "end": 20.8, "text": " In their AI Day presentation video, they really put up a clinic of recent AI research results"}, {"start": 20.8, "end": 24.72, "text": " and how they apply them to develop self-driving cars."}, {"start": 24.72, "end": 32.72, "text": " And, of course, there is plenty of coverage of the event, but as always, we are going to look at it from a different angle."}, {"start": 32.72, "end": 35.28, "text": " We are doing it paper style."}, {"start": 35.28, "end": 36.32, "text": " Why?"}, {"start": 36.32, "end": 44.56, "text": " Because after nearly every two-minute paper's episode, where we showcase an amazing paper, I get a question saying something like,"}, {"start": 44.56, "end": 49.84, "text": " okay, but when do I get to see or use this in the real world?"}, {"start": 49.84, "end": 53.599999999999994, "text": " And rightfully so, that is a good question."}, {"start": 53.6, "end": 63.84, "text": " And in this presentation, you will see that these papers that you see here get transferred into real-world products so fast it really makes my head spin."}, {"start": 63.84, "end": 68.0, "text": " Let's see this effect demonstrated by looking through their system."}, {"start": 68.0, "end": 75.44, "text": " Now, first, their cars have many cameras, no depth information, just the pixels from these cameras,"}, {"start": 75.44, "end": 80.48, "text": " and one of their goals is to create this vector space view that you see here."}, {"start": 80.48, "end": 87.44, "text": " That is almost like a map or a video game version of the real roads and objects around us."}, {"start": 87.44, "end": 90.16, "text": " That is a very difficult problem."}, {"start": 90.16, "end": 94.64, "text": " Why is that? Because the car has many cameras."}, {"start": 94.64, "end": 96.56, "text": " Is that a problem?"}, {"start": 96.56, "end": 100.56, "text": " Yes, kind of. 
I'll explain in a moment."}, {"start": 100.56, "end": 107.84, "text": " You see, there is a bottom layer that processes the raw sensor data from the cameras mounted on the vehicle."}, {"start": 107.84, "end": 120.64, "text": " So, here, in-go the raw pixels and outcomes more useful, high-level information that can be used to determine whether this clump of pixels is a car or a traffic light."}, {"start": 120.64, "end": 130.32, "text": " Then, in the upper layers, this data can be used for more specific tasks, for instance, trying to estimate where the lanes and curbs are."}, {"start": 130.32, "end": 133.84, "text": " So, what papers are used to accomplish this?"}, {"start": 133.84, "end": 142.0, "text": " Looking through the architecture diagrams, we see transformer neural networks, BIFPNs, and Ragnet."}, {"start": 142.0, "end": 153.6, "text": " All papers from the last few years, for instance, Ragnet is a neural network variant that is great at extracting spatial temporal information from the raw sensor data."}, {"start": 153.6, "end": 163.76, "text": " And that is a paper from 2020, from just one year ago, already actively used in training self-driving cars."}, {"start": 163.76, "end": 166.16, "text": " That is unreal."}, {"start": 166.16, "end": 170.72, "text": " Now, we mentioned that having many cameras is a bit of a problem."}, {"start": 170.72, "end": 175.04, "text": " Why is that? Isn't that supposed to be a good thing?"}, {"start": 175.04, "end": 180.56, "text": " Well, look, each of the cameras only sees parts of the truck."}, {"start": 180.56, "end": 185.44, "text": " So, how do we know where exactly it is and how long it is?"}, {"start": 185.44, "end": 192.07999999999998, "text": " We need to know all this information to be able to accurately put the truck into the vector space view."}, {"start": 192.08, "end": 199.52, "text": " What we need for this is a technique that can fuse information from many cameras together intelligently."}, {"start": 199.52, "end": 210.96, "text": " Note that this is devilishly difficult due to each of the cameras having a different calibration, location, view directions, and other properties."}, {"start": 210.96, "end": 217.68, "text": " So, who is to tell that the point here corresponds to which point in a different camera view?"}, {"start": 217.68, "end": 225.76000000000002, "text": " And this is accomplished through, yes, a transformer neural network, a paper from 2017."}, {"start": 225.76000000000002, "end": 231.36, "text": " So, does this multi-camera technique work? Does this improve anything?"}, {"start": 231.36, "end": 233.52, "text": " Well, let's see."}, {"start": 233.52, "end": 239.04000000000002, "text": " Oh, yes, the yellow predictions here are from the previous single camera network,"}, {"start": 239.04000000000002, "end": 245.44, "text": " and as you see, unfortunately, things flicker in and out of existence."}, {"start": 245.44, "end": 251.52, "text": " Why is that? 
It is because a passing car is leaving the view of one of the cameras"}, {"start": 251.52, "end": 259.92, "text": " and as it enters the view of the next one, they don't have this correspondance technique that would say where it is exactly."}, {"start": 259.92, "end": 267.04, "text": " And, look, the blue objects show the prediction of the multi-camera network that can do that,"}, {"start": 267.04, "end": 274.08, "text": " and things aren't perfect, but they are significantly better than the single camera network."}, {"start": 274.08, "end": 279.84, "text": " That is great. However, we are still not taking into consideration time."}, {"start": 280.8, "end": 284.56, "text": " Why is that important? Let's have a look at two examples."}, {"start": 285.28, "end": 292.15999999999997, "text": " One, if we are only looking at still images and not taking into consideration how they change over time,"}, {"start": 292.88, "end": 299.91999999999996, "text": " how do we know if this car is stationary? Is it about to park somewhere or is it speeding?"}, {"start": 299.92, "end": 308.24, "text": " Also, two, this car is now occluded, but we saw it a second ago, so we should know what it is up to."}, {"start": 309.52000000000004, "end": 315.92, "text": " That sounds great, and what else can we do if our self-driving system has a concept of time?"}, {"start": 316.56, "end": 320.16, "text": " Well, much like humans do, we can make predictions."}, {"start": 320.72, "end": 325.92, "text": " These predictions can take place both in terms of mapping what is likely to come,"}, {"start": 325.92, "end": 329.68, "text": " an intersection around about, and so on."}, {"start": 329.68, "end": 336.72, "text": " But, perhaps even more importantly, we can also make predictions about vehicle behavior."}, {"start": 336.72, "end": 342.96000000000004, "text": " Let's see how that works. The green lines show how far away the next vehicle is,"}, {"start": 342.96000000000004, "end": 349.68, "text": " and how fast it is going. The green line tells us the real true information about it."}, {"start": 349.68, "end": 358.72, "text": " Do you see the green? No, that's right, it is barely visible because it is occluded by a blue line,"}, {"start": 358.72, "end": 364.96000000000004, "text": " which is the prediction of the new video network. That means that its predictions are barely off"}, {"start": 364.96000000000004, "end": 371.84000000000003, "text": " from the real velocities and distances, which is absolutely amazing. And as you see with orange,"}, {"start": 371.84000000000003, "end": 376.24, "text": " the old network that was based on single images is off by quite a bit."}, {"start": 376.24, "end": 382.56, "text": " So now, a single car can make a rough map of its environment wherever it drives,"}, {"start": 382.56, "end": 388.96000000000004, "text": " and they can also stage the readings of multiple cars together into an even more accurate map."}, {"start": 388.96000000000004, "end": 395.44, "text": " Putting this all together, these cars have a proper understanding of their environment,"}, {"start": 395.44, "end": 401.76, "text": " and this makes navigation much easier. Look at those crisp, temporally stable"}, {"start": 401.76, "end": 408.4, "text": " labelling. It has very little flickering. Still not perfect by any means,"}, {"start": 408.4, "end": 415.12, "text": " but this is remarkable progress in so little time. 
And we are at the point where predicting"}, {"start": 415.12, "end": 420.8, "text": " the behaviors of other vehicles and pedestrians can also lead to better decision making."}, {"start": 421.68, "end": 429.59999999999997, "text": " But we are not done yet. Not even close. Look, the sad truth of driving is that unexpected things"}, {"start": 429.6, "end": 435.6, "text": " happen. For instance, this truck makes it very difficult for us to see, and the self-driving"}, {"start": 435.6, "end": 442.40000000000003, "text": " system does not have a lot of data to deal with that. So, what is a possible solution to that?"}, {"start": 443.20000000000005, "end": 450.64000000000004, "text": " There are two solutions. One is fetching more training data. One car can submit an unexpected"}, {"start": 450.64000000000004, "end": 458.24, "text": " event and request that the entire Tesla fleet sends over if they have encountered something similar."}, {"start": 458.24, "end": 464.48, "text": " Since there are so many of these cars on the streets, tens of thousands of similar examples"}, {"start": 464.48, "end": 470.0, "text": " can be fetched from them and added to the training data to improve the entire fleet."}, {"start": 470.8, "end": 478.72, "text": " That is mind-blowing. One car encounters a difficult situation, and then every car can learn from it."}, {"start": 479.44, "end": 485.12, "text": " How cool is that? That sounds great. So, what is the second solution?"}, {"start": 485.12, "end": 490.64, "text": " Not fetching more training data, but creating more training data."}, {"start": 490.64, "end": 498.64, "text": " What? Just make stuff up? Yes, that's exactly right. And if you think that is ridiculous,"}, {"start": 498.64, "end": 505.44, "text": " and are asking, how could that possibly work? Well, hold on to your papers because it does work."}, {"start": 505.44, "end": 512.4, "text": " You are looking at it right now. Yes, this is a photorealistic simulation that teaches"}, {"start": 512.4, "end": 519.28, "text": " self-driving cars to handle difficult corner cases better. In the real world, we can learn from"}, {"start": 519.28, "end": 524.3199999999999, "text": " things that already happened, but in a simulation, we can make anything happen."}, {"start": 525.04, "end": 530.72, "text": " This concept really works, and one of my favorite examples is OpenAI's robot hand"}, {"start": 530.72, "end": 536.0, "text": " that we have showcased earlier in this series. This also learns the rotation techniques in a"}, {"start": 536.0, "end": 542.8, "text": " simulation, and it does it so well that the software can be uploaded to a real robot hand,"}, {"start": 542.8, "end": 550.88, "text": " and it will work in real situations too. And now, the same concept for self-driving cars, loving it."}, {"start": 552.08, "end": 558.4, "text": " With these simulations, we can even teach these cars about cases that would otherwise be impossible,"}, {"start": 558.4, "end": 565.68, "text": " or unsafe to test. For instance, in this system, the car can safely learn what it should do if"}, {"start": 565.68, "end": 573.52, "text": " it sees people and dogs running on the highway. A capable artist can also create miles and miles"}, {"start": 573.52, "end": 580.0799999999999, "text": " of these virtual locations within a day of work. 
This simulation technique is truly a treasure trove"}, {"start": 580.0799999999999, "end": 587.12, "text": " of data because it can also be procedurally generated, and the moment the self-driving system"}, {"start": 587.12, "end": 593.52, "text": " makes an incorrect decision, a Tesla employee can immediately create an endless set of similar"}, {"start": 593.52, "end": 600.0, "text": " situations to teach it. Now, I don't know if you remember, we talked about a fantastic paper,"}, {"start": 600.0, "end": 607.68, "text": " a couple months ago, that looked at real-world videos, then took video footage from a game,"}, {"start": 607.68, "end": 614.0, "text": " and improved it to look like the real world. Convert video games to reality if you will."}, {"start": 614.72, "end": 620.96, "text": " This had an interesting limitation. For instance, since the AI was trained on the beautiful"}, {"start": 620.96, "end": 626.48, "text": " lush hills of Germany and Austria, it hasn't really seen the dry hills of LA."}, {"start": 627.52, "end": 634.88, "text": " So, what does it do with them? Look, it redrewed the hills the only way it saw hills exist,"}, {"start": 634.88, "end": 641.6, "text": " which is covered with trees. So, what does this have to do with Tesla's self-driving cars?"}, {"start": 642.32, "end": 648.64, "text": " Well, if you have been holding onto your paper so far, now squeeze that paper because they went"}, {"start": 648.64, "end": 656.24, "text": " the other way around. Yes, that's right. They take video footage of a real unexpected event"}, {"start": 656.24, "end": 662.3199999999999, "text": " where the self-driving system failed, use the automatic labeler used for the vector space view,"}, {"start": 662.96, "end": 666.8, "text": " and what do they make out of it? A video game version."}, {"start": 667.92, "end": 675.2, "text": " Holy matter of papers. And in this video game, it is suddenly much easier to teach the algorithm"}, {"start": 675.2, "end": 682.8000000000001, "text": " safely. You can also make it easier, harder, replace a car with a dog or a pack of dogs,"}, {"start": 682.8000000000001, "end": 689.76, "text": " and make many similar examples so that the AI can learn from this what if situations as much as"}, {"start": 689.76, "end": 697.9200000000001, "text": " possible. So, there you go. Full-tech transfer into a real AI system in just a year or two."}, {"start": 697.92, "end": 707.28, "text": " So, yes, the papers you see here are for real, as real as it gets. And, yes, the robot is not real,"}, {"start": 707.28, "end": 715.4399999999999, "text": " just a silly joke. For now. And two more things that make all this even more mind-blowing. One,"}, {"start": 715.4399999999999, "end": 721.12, "text": " remember, they don't showcase the latest and greatest that they have, just imagine that everything"}, {"start": 721.12, "end": 728.5600000000001, "text": " you heard today is old news compared to the tech they have now. And two, we have only looked at"}, {"start": 728.5600000000001, "end": 734.5600000000001, "text": " just one side of what is going on. For instance, we haven't even talked about their amazing"}, {"start": 734.5600000000001, "end": 742.48, "text": " dojo chip. And if all this comes to fruition, we will be able to travel cheaper, more relaxed,"}, {"start": 742.48, "end": 750.72, "text": " and also, perhaps most importantly, safer. I can't wait. I really cannot wait. What a time to"}, {"start": 750.72, "end": 757.0400000000001, "text": " be alive. 
This video has been supported by weights and biases. Check out the recent offering"}, {"start": 757.0400000000001, "end": 762.64, "text": " fully connected, a place where they bring machine learning practitioners together to share"}, {"start": 762.64, "end": 769.6800000000001, "text": " and discuss their ideas, learn from industry leaders, and even collaborate on projects together."}, {"start": 769.6800000000001, "end": 774.88, "text": " You see, I get messages from you fellow scholars telling me that you have been inspired by the"}, {"start": 774.88, "end": 782.16, "text": " series, but don't really know where to start. And here it is. Fully connected is a great way"}, {"start": 782.16, "end": 789.12, "text": " to learn about the fundamentals, how to reproduce experiments, get your papers accepted to a conference,"}, {"start": 789.12, "end": 796.4, "text": " and more. Make sure to visit them through wnb.me slash papers or just click the link in the video"}, {"start": 796.4, "end": 802.4, "text": " description. Our thanks to weights and biases for their longstanding support and for helping us"}, {"start": 802.4, "end": 807.1999999999999, "text": " make better videos for you. Thanks for watching and for your generous support, and I'll see you"}, {"start": 807.2, "end": 837.0400000000001, "text": " next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=_9Bli4zCzZY
This AI Creates Virtual Fingers! 🤝
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "ManipNet: Neural Manipulation Synthesis with a Hand-Object Spatial Representation" is available here: https://github.com/cghezhang/ManipNet ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-5859606/ Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #vr
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to see how we can use our hands, but not our fingers, to mingle with objects in virtual worlds. The promise of virtual reality, VR, is indeed truly incredible. If one day it comes to fruition, doctors could be trained to perform surgery in a virtual environment, expose astronauts to virtual zero-gravity simulations, work together with telepresence applications, you name it. The dream is getting closer and closer, but something is still missing. For instance, this previous work uses a learning-based algorithm to teach a head-mounted camera to tell the orientation of our hands at all times. One more paper down the line, this technique appeared that can deal with examples with challenging hand-hand interactions, deformations, lots of self-contact, and self-occlusion. This was absolutely amazing because these are not gloves. No, no. This is the reconstruction of the hand by the algorithm. Absolutely amazing. However, it is slow, and mingling with other objects is still quite limited. So, what is missing? What is left to be done here? Let's have a look at today's paper and find out together. This is its output. Yes, mingling that looks very natural. But, what is so interesting here? The interesting part is that it has realistic finger movements. Well, that means that it just reads the data from the sensors on the fingers, right? Now, hold on to your papers and we'll find out once we look at the input. Oh my, is this really true? No sensors on the fingers anywhere. What kind of black magic is this? And with that, we can now make the most important observation in the paper, and that is that it reads information from only the wrist and the object in the hand. Look, the sensors are on the gloves, but none are on the fingers. Once again, the sensors have no idea what we are doing with our fingers. It only reads the movement of our wrist and the object, and all the finger movement is synthesized by it automatically. Whoa! And with this, we can not only have a virtual version of our hand, but we can also manipulate virtual objects with very few sensor readings. The rest is up to the AI to synthesize. This means that we can have a drink with a friend online, use a virtual hammer too, and, depending on our mood, fix or destroy virtual objects. This is very challenging because the finger movements have to follow the geometry of the object. Look, here, the same hand is holding different objects, and the AI knows how to synthesize the appropriate finger movements for both of them. This is especially apparent when we change the scale of the object. You see, the small one requires small and precise finger movements to turn around. These are motions that need to be completely resynthesized for bigger objects. So cool. And now comes the key. So does this only work on objects that it has been trained on? No, not at all. For instance, the method has not seen this kind of teapot before, and still it knows how to use its handle, and to hold it from the bottom, too, even if both of these parts look different. Be careful, though, who knows, maybe virtual teapots can get hot, too. What's more, it handles the independent movement of the left and right hands. Now, how fast is all this? Can we have coffee together in virtual reality? Yes, absolutely. All this runs in close to real time.
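As a rough sketch of the kind of model the paper describes, the snippet below regresses finger joint angles from a short window of wrist and object trajectories plus a simple stand-in for the paper's hand-object spatial representation. The window length, the feature sizes, and the plain MLP are illustrative assumptions rather than the published architecture.

```python
# Hedged sketch of finger-motion synthesis from wrist + object signals only.
# The real method (ManipNet) uses richer hand-object spatial features; here a
# simple distance-based stand-in and an MLP illustrate the input/output shapes.
import torch
import torch.nn as nn

WINDOW = 10          # frames of past wrist/object trajectory (assumption)
WRIST_DOF = 6        # wrist position (3) + orientation as axis-angle (3)
OBJECT_DOF = 6       # object position (3) + orientation (3)
SHAPE_FEATS = 64     # e.g. distances from the hand to sampled object surface points
FINGER_JOINTS = 20   # joint angles to predict per hand (assumption)

class FingerSynth(nn.Module):
    def __init__(self):
        super().__init__()
        in_dim = WINDOW * (WRIST_DOF + OBJECT_DOF) + SHAPE_FEATS
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, FINGER_JOINTS),   # predicted finger joint angles
        )

    def forward(self, wrist_traj, object_traj, shape_feats):
        # wrist_traj: (B, WINDOW, 6), object_traj: (B, WINDOW, 6), shape_feats: (B, 64)
        x = torch.cat([wrist_traj.flatten(1), object_traj.flatten(1), shape_feats], dim=1)
        return self.net(x)

# Usage: no finger sensors anywhere -- only wrist and object motion go in,
# and a full set of finger joint angles comes out, one prediction per frame.
model = FingerSynth()
angles = model(torch.randn(1, WINDOW, 6), torch.randn(1, WINDOW, 6), torch.randn(1, 64))
print(angles.shape)  # torch.Size([1, 20])
```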
There is a tiny bit of delay, but a result like this is already amazing, and this is typically the kind of thing that can be fixed one more paper down the line. However, not even this technique is perfect: it might still miss small features on an object; for instance, a very thin handle might confuse it. Or if it has an inaccurate reading of the hand pose and distances, this might happen. But for now, having a virtual coffee together, yes, please, sign me up. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000s. And researchers at organizations like Apple, MIT, and Caltech are using Lambda Cloud instances, workstations, or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.76, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zona Ifehir."}, {"start": 4.76, "end": 14.56, "text": " Today, we are going to see how we can use our hands, but not our fingers to mingle with objects in virtual worlds."}, {"start": 14.56, "end": 19.6, "text": " The promise of virtual reality VR is indeed truly incredible."}, {"start": 19.6, "end": 26.44, "text": " If one day it comes to fruition, doctors could be trained to perform surgery in a virtual environment,"}, {"start": 26.44, "end": 34.92, "text": " expose astronauts to virtual zero-gravity simulations, work together with telepresence applications, you name it."}, {"start": 34.92, "end": 40.52, "text": " The dream is getting closer and closer, but something is still missing."}, {"start": 40.52, "end": 51.52, "text": " For instance, this previous work uses a learning-based algorithm to teach a head-mounted camera to tell the orientation of our hands at all times."}, {"start": 51.52, "end": 64.68, "text": " One more paper down the line, this technique appeared that can deal with examples with challenging hand-hand interactions, deformations, lots of self-contact, and self-occlusion."}, {"start": 64.68, "end": 69.48, "text": " This was absolutely amazing because these are not gloves."}, {"start": 69.48, "end": 70.72, "text": " No, no."}, {"start": 70.72, "end": 75.16, "text": " This is the reconstruction of the hand by the algorithm."}, {"start": 75.16, "end": 77.12, "text": " Absolutely amazing."}, {"start": 77.12, "end": 83.68, "text": " However, it is slow, and mingling with other objects is still quite limited."}, {"start": 83.68, "end": 85.84, "text": " So, what is missing?"}, {"start": 85.84, "end": 87.84, "text": " What is left to be done here?"}, {"start": 87.84, "end": 91.92, "text": " Let's have a look at today's paper and find out together."}, {"start": 91.92, "end": 93.84, "text": " This is its output."}, {"start": 93.84, "end": 97.68, "text": " Yes, mingling that looks very natural."}, {"start": 97.68, "end": 100.32000000000001, "text": " But, what is so interesting here?"}, {"start": 100.32000000000001, "end": 104.64, "text": " The interesting part is that it has realistic finger movements."}, {"start": 104.64, "end": 110.8, "text": " Well, that means that it just reads the data from the sensors on the fingers, right?"}, {"start": 110.8, "end": 116.72, "text": " Now, hold on to your papers and we'll find out once we look at the input."}, {"start": 116.72, "end": 119.92, "text": " Oh my, is this really true?"}, {"start": 119.92, "end": 123.2, "text": " No sensors on the fingers anywhere."}, {"start": 123.2, "end": 125.88, "text": " What kind of black magic is this?"}, {"start": 125.88, "end": 131.04, "text": " And with that, we can now make the most important observation in the paper,"}, {"start": 131.04, "end": 137.92, "text": " and that is that it reads information from only the wrist and the objects in the hand."}, {"start": 137.92, "end": 143.35999999999999, "text": " Look, the sensors are on the gloves, but none are on the fingers."}, {"start": 143.35999999999999, "end": 147.84, "text": " Once again, the sensors have no idea what we are doing with our fingers."}, {"start": 147.84, "end": 153.72, "text": " It only reads the movement of our wrist and the object, and all the finger movement is"}, {"start": 153.72, "end": 156.76, "text": " synthesized by it automatically."}, {"start": 156.76, "end": 159.0, "text": " Whoa!"}, {"start": 159.0, "end": 165.16, "text": " 
And with this, we can not only have a virtual version of our hand, but we can also manipulate"}, {"start": 165.16, "end": 169.12, "text": " virtual objects with very few sensor readings."}, {"start": 169.12, "end": 172.4, "text": " The rest is up to the AI to synthesize."}, {"start": 172.4, "end": 178.56, "text": " This means that we can have a drink with a friend online, use a virtual hammer, too, depending"}, {"start": 178.56, "end": 183.28, "text": " on our mood, fix, or destroy virtual objects."}, {"start": 183.28, "end": 188.56, "text": " This is very challenging because the finger movements have to follow the geometry of the"}, {"start": 188.56, "end": 189.56, "text": " object."}, {"start": 189.56, "end": 196.72, "text": " Look, here, the same hand is holding different objects, and the AI knows how to synthesize"}, {"start": 196.72, "end": 200.12, "text": " the appropriate finger movements for both of them."}, {"start": 200.12, "end": 204.8, "text": " This is especially apparent when we change the scale of the object."}, {"start": 204.8, "end": 211.12, "text": " You see, the small one requires small and precise finger movements to turn around."}, {"start": 211.12, "end": 216.4, "text": " These are motions that need to be completely resynthesized for bigger objects."}, {"start": 216.4, "end": 218.16, "text": " So cool."}, {"start": 218.16, "end": 220.07999999999998, "text": " And now comes the key."}, {"start": 220.07999999999998, "end": 224.24, "text": " So does this only work on objects that it has been trained on?"}, {"start": 224.24, "end": 226.4, "text": " No, not at all."}, {"start": 226.4, "end": 231.72, "text": " For instance, the method has not seen this kind of teapot before, and still it knows how"}, {"start": 231.72, "end": 237.88, "text": " to use its handle, and to hold it from the bottom, too, even if both of these parts look"}, {"start": 237.88, "end": 239.4, "text": " different."}, {"start": 239.4, "end": 244.84, "text": " Be careful, though, who knows, maybe virtual teapots can get hot, too."}, {"start": 244.84, "end": 249.24, "text": " Once more, it handles the independent movement of the left and right hands."}, {"start": 249.24, "end": 252.88, "text": " Now, how fast is all this?"}, {"start": 252.88, "end": 255.88, "text": " Can we have coffee together in virtual reality?"}, {"start": 255.88, "end": 258.2, "text": " Yes, absolutely."}, {"start": 258.2, "end": 261.28000000000003, "text": " All this runs in close to real time."}, {"start": 261.28000000000003, "end": 267.4, "text": " There is a tiny bit of delay, though, but a result like this is already amazing, and"}, {"start": 267.4, "end": 272.32, "text": " this is typically the kind of thing that can be fixed one more paper down the line."}, {"start": 272.32, "end": 278.44, "text": " However, not even this technique is perfect, it might still miss small features on an object,"}, {"start": 278.44, "end": 282.68, "text": " for instance, a very thin handle might confuse it."}, {"start": 282.68, "end": 289.2, "text": " Or if it has an inaccurate reading of the hand pose and distances, this might happen."}, {"start": 289.2, "end": 296.03999999999996, "text": " But for now, having a virtual coffee together, yes, please, sign me up."}, {"start": 296.03999999999996, "end": 299.48, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 299.48, "end": 305.44, "text": " If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 305.44, "end": 313.20000000000005, 
"text": " They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto"}, {"start": 313.20000000000005, "end": 319.84000000000003, "text": " your papers because Lambda GPU Cloud can cost less than half of AWS and Azure."}, {"start": 319.84000000000003, "end": 325.36, "text": " Plus, they are the only Cloud service with 48GB RTX 8000."}, {"start": 325.36, "end": 331.76, "text": " And researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances,"}, {"start": 331.76, "end": 333.56, "text": " workstations or servers."}, {"start": 333.56, "end": 339.52000000000004, "text": " Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances"}, {"start": 339.52000000000004, "end": 340.52000000000004, "text": " today."}, {"start": 340.52000000000004, "end": 345.0, "text": " Our thanks to Lambda for their long-standing support and for helping us make better videos"}, {"start": 345.0, "end": 346.0, "text": " for you."}, {"start": 346.0, "end": 373.64, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=UrB-tqA8oeg
This AI Helps Making A Music Video! 💃
❤️ Train a neural network and track your experiments with Weights & Biases here: http://wandb.me/paperintro 📝 The paper "Editable Free-Viewpoint Video using a Layered Neural Representation" is available here: https://jiakai-zhang.github.io/st-nerf/ https://github.com/DarlingHang/st-nerf 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-882940/ Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #aimusicvideo
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to make some crazy, synthetic music videos. In machine learning research, view synthesis papers are on the rise these days. These techniques are also referred to as NeRF variants; NeRF is a learning-based algorithm that tries to reproduce real-world scenes from only a few views. It is very challenging: look, in go a bunch of photos of a scene, and the method has to be able to synthesize new photorealistic images between these photos. But this is not the only paper in this area. Researchers are very aware of the potential here, and thus, a great number of NeRF variants are appearing every month. For instance, here is a recent one that extends the original technique to handle shiny and reflective objects better. So, what else is there to do here? Well, look here. This new one demands not a bunch of photos from just one camera, no, no, but from 16 different cameras. That's a big ask. But, in return, the method now has tons of information about the geometry and the movement of these test subjects, so is it intelligent enough to make something useful out of it? Now, believe it or not, this in return can not only help us look around in the scene, but even edit it in three new ways. For instance, one, we can change the scale of these subjects, add and remove them from the scene, and even copy-paste them. Excellent for creating music videos. Well, talking about music videos, do you know what is even more excellent for those? Retiming movements. That is also possible. This can, for instance, improve an OK dancing performance into an excellent one. And, three, because now we are in charge of the final footage, if the original footage is shaky, well, we can choose to eliminate that camera shake. Game changer. It's not quite the hardware requirement where you just whip out your smartphone and start NeRFing and editing, but for what it can do, it really does not ask for a lot, especially given that if we wish to, we can even remove some of these cameras and still expect reasonable results. We lose roughly a decibel of signal per camera. Here is what that looks like. Not too shabby. And, all this progress just one more paper down the line. And, I like the idea behind this paper a great deal, because typically, what we are looking for in a follow-up paper is trying to achieve similar results while asking for less data from the user. This paper goes into the exact other direction and asks what amazing things could be done if we had more data instead. Loving it. And with that, not only view synthesis, but scene editing is also possible. What a time to be alive. This video has been supported by Weights & Biases. Being a machine learning researcher means doing tons of experiments and, of course, creating tons of data. But, I am not looking for data, I am looking for insights. And, Weights & Biases helps with exactly that. They have tools for experiment tracking, dataset and model versioning, and even hyperparameter optimization. No wonder this is the experiment tracking tool of choice of OpenAI, Toyota Research, Samsung, and many more prestigious labs. Make sure to use the link wandb.me slash paper intro, or just click the link in the video description, and try this 10 minute example of Weights & Biases today to experience the wonderful feeling of training a neural network and being in control of your experiments. After you try it, you won't want to go back.
Thanks for watching and for your generous support, and I'll see you next time.
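A toy, image-space illustration of the layered representation idea from the transcript above: once each performer lives on its own RGBA layer, edits such as duplicating, repositioning, or retiming a performer become per-layer transforms followed by alpha compositing. The real method operates on neural layers in 3D; the array shapes and helper functions below are assumptions.

```python
# Toy, image-space illustration of layered editing: rescaling, duplicating, or
# retiming a performer is just a per-layer transform plus alpha compositing.
# (The real method uses 3D neural layers; shapes and helpers here are assumptions.)
import numpy as np

H, W = 240, 320

def over(dst_rgb, layer_rgba):
    """Standard 'over' alpha compositing of one RGBA layer onto an RGB image."""
    rgb, a = layer_rgba[..., :3], layer_rgba[..., 3:4]
    return rgb * a + dst_rgb * (1.0 - a)

def shift(layer_rgba, dx):
    """Move a layer horizontally, e.g. to place a duplicated performer elsewhere."""
    return np.roll(layer_rgba, dx, axis=1)

def render(background_rgb, performer_layers):
    frame = background_rgb.copy()
    for layer in performer_layers:
        frame = over(frame, layer)
    return frame

# Fake data: a grey background and one performer RGBA layer per frame of a clip.
background = np.full((H, W, 3), 0.5)
clip = [np.zeros((H, W, 4)) for _ in range(60)]
for t, layer in enumerate(clip):
    layer[100:140, 50 + t : 90 + t, :] = [0.9, 0.2, 0.2, 1.0]   # a moving red box

t = 30
original   = render(background, [clip[t]])
duplicated = render(background, [clip[t], shift(clip[t], 120)])  # copy-paste a performer
retimed    = render(background, [clip[t // 2]])                  # slow this performer down
print(original.shape, duplicated.shape, retimed.shape)
```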
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Jona Ifeher."}, {"start": 5.0, "end": 10.0, "text": " Today we are going to make some crazy, synthetic music videos."}, {"start": 10.0, "end": 15.0, "text": " In machine learning research, view synthesis papers are under eyes these days."}, {"start": 15.0, "end": 19.0, "text": " These techniques are also referred to as Nerf variants,"}, {"start": 19.0, "end": 26.0, "text": " which is a learning based algorithm that tries to reproduce real-world scenes from only a few views."}, {"start": 26.0, "end": 32.0, "text": " It is very challenging, look, in-go, a bunch of photos of a scene,"}, {"start": 32.0, "end": 39.0, "text": " and the method has to be able to synthesize new photorealistic images between these photos."}, {"start": 39.0, "end": 42.0, "text": " But this is not the only paper in this area."}, {"start": 42.0, "end": 45.0, "text": " Researchers are very aware of the potential here,"}, {"start": 45.0, "end": 50.0, "text": " and thus, a great number of Nerf variants are appearing every month."}, {"start": 50.0, "end": 59.0, "text": " For instance, here is a recent one that extends the original technique to handle shiny and reflective objects better."}, {"start": 59.0, "end": 62.0, "text": " So, what else is there to do here?"}, {"start": 62.0, "end": 68.0, "text": " Well, look here. This new one demands not a bunch of photos from just one camera,"}, {"start": 68.0, "end": 74.0, "text": " no, no, but from 16 different cameras. That's a big ask."}, {"start": 74.0, "end": 82.0, "text": " But, in return, the method now has tons of information about the geometry and the movement of these test subjects,"}, {"start": 82.0, "end": 87.0, "text": " so is it intelligent enough to make something useful out of it?"}, {"start": 87.0, "end": 94.0, "text": " Now, believe it or not, this in return can not only help us look around in the scene,"}, {"start": 94.0, "end": 97.0, "text": " but even edit it in three new ways."}, {"start": 97.0, "end": 106.0, "text": " For instance, one, we can change the scale of these subjects,"}, {"start": 106.0, "end": 111.0, "text": " add and remove them from the scene, and even copy-paste them."}, {"start": 111.0, "end": 114.0, "text": " Excellent for creating music videos."}, {"start": 114.0, "end": 120.0, "text": " Well, talking about music videos, do you know what is even more excellent for those,"}, {"start": 120.0, "end": 124.0, "text": " retiming movements? 
That is also possible."}, {"start": 124.0, "end": 133.0, "text": " This can, for instance, improve an OK dancing performance into an excellent one."}, {"start": 133.0, "end": 138.0, "text": " And, three, because now we are in charge of the final footage,"}, {"start": 138.0, "end": 145.0, "text": " if the original footage is shaky, well, we can choose to eliminate that camera shake."}, {"start": 145.0, "end": 147.0, "text": " Game changer."}, {"start": 147.0, "end": 151.0, "text": " It's not quite the hardware requirement where you just whip out your smartphone"}, {"start": 151.0, "end": 158.0, "text": " and start nerfing and editing, but for what it can do, it really does not ask for a lot,"}, {"start": 158.0, "end": 163.0, "text": " especially given that if we wish to, we can even remove some of these cameras"}, {"start": 163.0, "end": 166.0, "text": " and still expect reasonable results."}, {"start": 166.0, "end": 170.0, "text": " We lose roughly a decibel of signal per camera."}, {"start": 170.0, "end": 173.0, "text": " Here is what that looks like."}, {"start": 173.0, "end": 175.0, "text": " Not too shabby."}, {"start": 175.0, "end": 179.0, "text": " And, all this progress just one more paper down the line."}, {"start": 179.0, "end": 184.0, "text": " And, I like the idea behind this paper a great deal, because typically,"}, {"start": 184.0, "end": 189.0, "text": " what we are looking for in a follow-up paper is trying to achieve similar results"}, {"start": 189.0, "end": 193.0, "text": " while asking for less data from the user."}, {"start": 193.0, "end": 200.0, "text": " This paper goes into the exact other direction and asks what amazing things could be done"}, {"start": 200.0, "end": 203.0, "text": " if we had more data instead."}, {"start": 203.0, "end": 205.0, "text": " Loving it."}, {"start": 205.0, "end": 212.0, "text": " We do not have to worry about that, not only mirror view synthesis, but mirror scene editing is also possible."}, {"start": 212.0, "end": 214.0, "text": " What a time to be alive."}, {"start": 214.0, "end": 217.0, "text": " This video has been supported by weights and biases."}, {"start": 217.0, "end": 225.0, "text": " Being a machine learning researcher means doing tons of experiments and, of course, creating tons of data."}, {"start": 225.0, "end": 229.0, "text": " But, I am not looking for data, I am looking for insights."}, {"start": 229.0, "end": 233.0, "text": " And, weights and biases helps with exactly that."}, {"start": 233.0, "end": 241.0, "text": " We have tools for experiment tracking, data set and model versioning, and even hyper parameter optimization."}, {"start": 241.0, "end": 250.0, "text": " No wonder this is the experiment tracking tool choice of open AI, Toyota Research, Samsung, and many more prestigious labs."}, {"start": 250.0, "end": 258.0, "text": " Make sure to use the link WNB.ME slash paper intro, or just click the link in the video description,"}, {"start": 258.0, "end": 270.0, "text": " and try this 10 minute example of weights and biases today to experience the wonderful feeling of training a neural network and being in control of your experiments."}, {"start": 270.0, "end": 272.0, "text": " After you try it, you won't want to go back."}, {"start": 272.0, "end": 299.0, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=SsJ_AusntiU
This AI Learned Boxing…With Serious Knockout Power! 🥊
❤️ Check out Perceptilabs and sign up for a free demo here: https://www.perceptilabs.com/papers ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 📝 The paper "Control Strategies for Physically Simulated Characters Performing Two-player Competitive Sports" is available here: https://research.fb.com/publications/control-strategies-for-physically-simulated-characters-performing-two-player-competitive-sports/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background design: Felícia Zsolnai-Fehér - http://felicia.hu 00:00 Intro - You shall not pass! 00:49 Does nothing - still wins! 01:30 Boxing - but not so well 02:13 Learning is happening 02:39 After 250 million training steps 03:10 Drunkards no more! 03:29 Serious knockout power! 04:00 It works for fencing too 04:20 First Law of Papers 04:43 An important lesson Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to see an AI learn boxing and even mimic gorillas during this process. Now, in an earlier work, we saw a few examples of AI agents playing two-player sports. For instance, this is the You Shall Not Pass game, where the red agent is trying to hold back the blue character and not let it cross the line. Here you see two regular AIs duking it out; sometimes the red wins, sometimes the blue is able to get through. Nothing too crazy here. Until this happens. Look, what is happening? It seems that this agent started to do nothing and still won. Not only that, but it suddenly started winning almost all the games. How is this even possible? Well, what the agent did is perhaps the AI equivalent of hypnotizing the opponent, if you will. The more rigorous term for this is that it induces off-distribution activations in its opponent. This adversarial agent is really doing nothing, but that's not enough. It is doing nothing in a way that reprograms its opponent to make mistakes and behave close to a completely randomly acting agent. Now, this new paper showcases AI agents that can learn boxing. The AI is asked to control these joint-actuated characters, which are embedded in a physics simulation. Well, that is quite a challenge. Look, for quite a while, after 130 million steps of training, it cannot even hold it together. And yes, these folks collapse, but this is not the good kind of hypnotic adversarial collapsing. I am afraid this is just passing out without any particular benefits. That was quite a bit of training and all this for nearly nothing. Right? Well, maybe. Let's see what they did after 200 million training steps. Look, they can not only hold it together, but they have a little footwork going on and can circle each other and try to take the middle of the ring. Improvements. Good. But this is not dancing practice. This is boxing. I would like to see some boxing today and it doesn't seem to happen. Until we wait for a little longer, which is 250 million training steps. Now, is this boxing? Not quite. This is more like two drunkards trying to duke it out when neither of them knows how to throw a real punch, but their gloves are starting to touch the opponent and they start getting rewards for it. What does that mean for an intelligent agent? Well, it means that over time it will learn to do that a little better. And hold on to your papers and see what they do after 420 million steps. Oh, wow. Look at that. I am seeing some punches, and not only that, but I also see some body and head movement to evade the punches. Very cool. And if we keep going for longer, whoa, these guys can fight. They now learned to perform feints, jabs, and have some proper knockout power too. And if you have been holding onto your papers, now squeeze that paper, because all they looked at before starting the training was 90 seconds of motion capture data. And this is a general framework that also works for fencing. Look, the agents learn to lunge, deflect, evade attacks, and more. Absolutely amazing. What a time to be alive. So this was approximately a billion training steps, right? So how long did that take to compute? It took approximately a week. And you know what's coming? Of course, we invoke the First Law of Papers, which says that research is a process. Do not look at where we are. Look at where we will be two more papers down the line. And two more papers down the line, I bet this will be possible in a matter of hours.
And this is the part with the gorillas. It is also interesting that even though there were plenty of reasons to, the researchers didn't quit after 130 million steps. They just kept on going and eventually succeeded, especially in the presence of not-so-trivial training curves, where the blocking of the other player can worsen the performance and it is often not so easy to tell where we are. That is a great life lesson right there. PerceptiLabs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers to easily install the free local version of their system today. Thanks for watching and for your generous support and I'll see you next time.
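As a hedged sketch of the training signal described in the transcript above, the snippet below combines an imitation term that keeps the motion close to the motion capture reference, a contact term that rewards a glove touching the opponent, and a penalty for collapsing. The weights, the simple contact test, and the tiny fake state are invented for illustration; the paper's actual reward terms differ.

```python
# Hedged sketch of reward shaping for a simulated two-player boxing agent:
# a small imitation term keeps the motion human-like, a contact term rewards
# glove-on-opponent hits, and a penalty discourages falling over.
# All weights and the tiny "environment" are invented for illustration.
import numpy as np

def imitation_reward(pose, mocap_pose):
    """Stay close to the reference motion-capture pose (joint angles)."""
    return float(np.exp(-2.0 * np.mean((pose - mocap_pose) ** 2)))

def contact_reward(glove_pos, opponent_torso_pos, reach=0.25):
    """Reward landing a punch: glove within `reach` metres of the opponent's torso."""
    return 1.0 if np.linalg.norm(glove_pos - opponent_torso_pos) < reach else 0.0

def upright_penalty(head_height, threshold=1.2):
    """Penalise collapsing, which early agents do a lot of."""
    return -1.0 if head_height < threshold else 0.0

def step_reward(state, mocap_pose):
    return (0.5 * imitation_reward(state["pose"], mocap_pose)
            + 1.0 * contact_reward(state["glove"], state["opponent_torso"])
            + 1.0 * upright_penalty(state["head_height"]))

# Usage with a fake state; in practice this would feed a self-play RL loop
# running for hundreds of millions of simulation steps.
state = {
    "pose": np.random.randn(20),
    "glove": np.array([0.4, 1.3, 0.6]),
    "opponent_torso": np.array([0.5, 1.3, 0.7]),
    "head_height": 1.6,
}
print(step_reward(state, mocap_pose=np.random.randn(20)))
```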
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Kato Jean-Eiffahir."}, {"start": 5.0, "end": 14.0, "text": " Today we are going to see an AI learn boxing and even mimic gorillas during this process."}, {"start": 14.0, "end": 21.0, "text": " Now, in an earlier work, we saw a few examples of AI agents playing two player sports."}, {"start": 21.0, "end": 28.0, "text": " For instance, this is the You Shall Not Pass game where the red agent is trying to hold back the blue character"}, {"start": 28.0, "end": 31.0, "text": " and not let it cross the line."}, {"start": 31.0, "end": 39.0, "text": " Here you see two regular AI's dooking it out, sometimes the red wins, sometimes the blue is able to get through."}, {"start": 39.0, "end": 42.0, "text": " Nothing too crazy here."}, {"start": 42.0, "end": 45.0, "text": " Until this happens."}, {"start": 45.0, "end": 48.0, "text": " Look, what is happening?"}, {"start": 48.0, "end": 53.0, "text": " It seems that this agent started to do nothing and still won."}, {"start": 53.0, "end": 58.0, "text": " Not only that, but it suddenly started winning almost all the games."}, {"start": 58.0, "end": 61.0, "text": " How is this even possible?"}, {"start": 61.0, "end": 67.0, "text": " Well, what the agent did is perhaps the AI equivalent of hypnotizing the opponent if you will."}, {"start": 67.0, "end": 74.0, "text": " The more rigorous term for this is that it induces of distribution activations in its opponent."}, {"start": 74.0, "end": 79.0, "text": " This adversarial agent is really doing nothing, but that's not enough."}, {"start": 79.0, "end": 89.0, "text": " It is doing nothing in a way that reprograms its opponent to make mistakes and behave close to a completely randomly acting agent."}, {"start": 89.0, "end": 94.0, "text": " Now, this no paper showcases AI agents that can learn boxing."}, {"start": 94.0, "end": 102.0, "text": " The AI is asked to control these joint-actuated characters which are embedded in a physics simulation."}, {"start": 102.0, "end": 112.0, "text": " Well, that is quite a challenge. Look, for quite a while, after 130 million steps of training, it cannot even hold it together."}, {"start": 112.0, "end": 120.0, "text": " And yes, these folks collapse, but this is not the good kind of hypnotic adversarial collapsing."}, {"start": 120.0, "end": 126.0, "text": " I am afraid this is just passing out without any particular benefits."}, {"start": 126.0, "end": 131.0, "text": " That was quite a bit of training and all this for nearly nothing."}, {"start": 131.0, "end": 138.0, "text": " Right? Well, maybe. Let's see what they did after 200 million training steps."}, {"start": 138.0, "end": 148.0, "text": " Look, they can not only hold it together, but they have a little footwork going on and can circle each other and try to take the middle of the ring."}, {"start": 148.0, "end": 154.0, "text": " Improvements. Good. But this is not dancing practice. This is boxing."}, {"start": 154.0, "end": 159.0, "text": " I would like to see some boxing today and it doesn't seem to happen."}, {"start": 159.0, "end": 166.0, "text": " Until we wait for a little longer, which is 250 million training steps."}, {"start": 166.0, "end": 176.0, "text": " Now, is this boxing? Not quite. 
This is more like two drunkards trying to do it out when neither of them knows how to throw a real punch,"}, {"start": 176.0, "end": 182.0, "text": " but their gloves are starting to touch the opponent and they start getting rewards for it."}, {"start": 182.0, "end": 191.0, "text": " What does that mean for an intelligent agent? Well, it means that over time it will learn to do that a little better."}, {"start": 191.0, "end": 197.0, "text": " And hold on to your papers and see what they do after 420 million steps."}, {"start": 197.0, "end": 207.0, "text": " Oh, wow. Look at that. I am seeing some punches and not only that, but I also see somebody and head movement to evade the punches."}, {"start": 207.0, "end": 214.0, "text": " Very cool. And if we keep going for longer,"}, {"start": 214.0, "end": 225.0, "text": " Whoa. These guys can fight. They now learned to perform feints, jabs, and have some proper knockout power too."}, {"start": 225.0, "end": 236.0, "text": " And if you have been holding onto your papers, now squeeze that paper because all they looked at before starting the training was 90 seconds of motion capture data."}, {"start": 236.0, "end": 242.0, "text": " And this is a general framework that also works for fencing as well."}, {"start": 242.0, "end": 248.0, "text": " Look, the agents learn to lunge, deflect, evade attacks, and more."}, {"start": 248.0, "end": 257.0, "text": " Absolutely amazing. What a time to be alive. So this was approximately a billion training steps, right?"}, {"start": 257.0, "end": 263.0, "text": " So how long did that take to compute? It took approximately a week."}, {"start": 263.0, "end": 271.0, "text": " And you know what's coming? Of course, we invoke the first law of papers which says that research is a process."}, {"start": 271.0, "end": 277.0, "text": " Do not look at where we are. Look at where we will be two more papers down the line."}, {"start": 277.0, "end": 283.0, "text": " And two more papers down the line. I bet this will be possible in a matter of hours."}, {"start": 283.0, "end": 290.0, "text": " And this is the part with the gorillas. It is also interesting that even though there were plenty of reasons to,"}, {"start": 290.0, "end": 299.0, "text": " the researchers didn't quit after 130 million steps. 
They just kept on going and eventually succeeded."}, {"start": 299.0, "end": 310.0, "text": " Especially in the presence of not so trivial training curves where the blocking of the other player can worsen the performance and it's often not as easy to tell where we are."}, {"start": 310.0, "end": 313.0, "text": " That is a great life lesson right there."}, {"start": 313.0, "end": 321.0, "text": " PerceptiLabs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible."}, {"start": 321.0, "end": 330.0, "text": " This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it."}, {"start": 330.0, "end": 335.0, "text": " Look, it lets you toggle between the visual modeler and the code editor."}, {"start": 335.0, "end": 345.0, "text": " It even generates visualizations for all the model variables and gives you recommendations both during modeling and training and does all this automatically."}, {"start": 345.0, "end": 351.0, "text": " I only wish I had a tool like this when I was working on my neural networks during my PhD years."}, {"start": 351.0, "end": 358.0, "text": " Visit perceptiLabs.com slash papers to easily install the free local version of their system today."}, {"start": 358.0, "end": 367.0, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
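The boxing transcript above describes agents that only start improving once their gloves touch the opponent and rewards start flowing. As a purely illustrative aside, here is a minimal sketch of that kind of shaped reward for a physics-simulated boxer; the state fields and the constants are my own assumptions, not the paper's actual interface.

```python
# Hypothetical reward shaping for a self-play boxing agent.
# The state fields and weights below are illustrative assumptions, not the paper's API.
from dataclasses import dataclass

@dataclass
class BoxerState:
    glove_contact: bool   # did a glove touch the opponent this step?
    fell_over: bool       # did the character collapse?
    torso_height: float   # metres; used to encourage staying upright

def shaped_reward(state: BoxerState) -> float:
    """Small dense rewards for staying upright early on; landed punches dominate later."""
    reward = 0.01 * min(state.torso_height, 1.5)   # stay on your feet
    if state.glove_contact:
        reward += 1.0                              # reward landed punches
    if state.fell_over:
        reward -= 5.0                              # penalise passing out
    return reward

# Example: an upright agent that just landed a punch
print(shaped_reward(BoxerState(glove_contact=True, fell_over=False, torso_height=1.4)))
```

In practice such shaping terms have to be tuned carefully, since an agent will happily exploit any loophole in them, much like the "hypnotizing" adversarial agent described in the transcript.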
Two Minute Papers
https://www.youtube.com/watch?v=lCBSGOwV-_o
This Magical AI Cuts People Out Of Your Videos! ✂️
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers ❤️ Their mentioned report is available here: https://wandb.ai/_scott/omnimatte/reports/Omnimatte-Associating-Objects-and-Their-Effects--Vmlldzo5MDQxNTc 📝 The paper "Omnimatte: Associating Objects and Their Effects in Video" is available here: https://omnimatte.github.io/ Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Or join us here: https://www.youtube.com/user/keeroyz/join Thumbnail background image credit: https://pixabay.com/images/id-4762800/ Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to see magical things that open up when we are able to automatically find the foreground and the background of a video. Let's see why that matters. This new technique leans on a previous method to find the boy and the dog. Let's call this level 1 segmentation. So far so good, but this is not the state of the art yet. Now comes level 2. It also found the shadow of the boy and the shadow of the dog. Now we are talking, but it doesn't stop there. It gets even better. Level 3, this is where things get out of hand. Look, the dog is occluding the boy's shadow, and it is able to deal with that too. So if we can identify all of the effects that are attached to the boy and the dog, what can we do with all this information? Well, for instance, we can even remove them from the video. Nothing to see here. Now, a common problem is that the silhouette of the subject still remains in the final footage, so let's take a close look together. I don't see anything at all. Wow! Let me know in the comments below if you do. Just to showcase how good this removal is, here is a good technique from just one year ago. Do you see it? This requires the shadows to be found manually, so we have to work with that. And still, in the outputs you can see the silhouette we mentioned. And how much better is the new method? Well, it finds the shadows automatically, which is already mind-blowing, and the outputs are… Yes, much cleaner. Not perfect, there is still some silhouette action, but if I were not actively looking for it, I might not have noticed it. It can also remove people from this trampoline scene, and not only the bodies, but it also removes their effect on the trampolines as well. Wow! And as this method can perform all this reliably, it opens up the possibility for new, magical effects. For instance, we can duplicate this test subject and even fade it in and out. Note that it has found its shadows as well. Excellent! So, it can not only find the shape of the boy and the dog, it also knows that it's not enough to just find their silhouettes; it has to find the additional effects they have on the footage as well. For instance, their shadows. That is wonderful, and what is even more wonderful is that this was only one of the simpler things it could do. And shadows are not the only potential correlated effects. Look, a previous method was able to find this one here, but it's not enough to remove it, because it also has additional effects on the scene. What are those? Well, look, it has reflections, and it creates ripples too. This is so much more difficult than just finding shadows. And now, let's see the new method. It knows about the reflections and ripples, finds both of them, and gives us this beautifully clean result. Nothing to see here. And also, look at this elephant. Removing just the silhouette of the elephant is not enough, it also has to find all the dust around it, and it gets worse, the dust is changing rapidly over time. And believe it or not, wow, it can find the dust too and remove the elephant. Again, nothing to see here. And if you think that this dust was the new algorithm at its best, then have a look at this drifting car. Previous method? Yes, that is the car, but you know what I want. I want the smoke gone too. So that's probably impossible, right? Well, let's have a look. Wow, I can't believe it. It grabbed and removed the car and the smoke together, and once again, nothing to see here.
So what are those more magical things that this opens up? Watch carefully. It can make the colors pop here. And remember, it can find the reflections of the flamingo, so it keeps not only the flamingo, but the reflection of the flamingo in color as well. Absolutely amazing. And if we can find the background of the video, we can even change the background. This works even in the presence of a moving camera, which is a challenging problem. Now of course, not even this technique is perfect. Look here. The reflections are copied off of the previous scene, and it shows on the new one. So, what do you think? What would you use this technique for? Let me know in the comments, or if you wish to discuss similar topics with other fellow scholars in a warm and welcoming environment, make sure to join our Discord channel. Also, I would like to send a big thank you to the mods and everyone who helps run this community. The link to the server is available in the video description. You are invited. What you see here is a report on this exact paper we have talked about, which was made by Weights & Biases. I put a link to it in the description. Make sure to have a look. I think it helps you understand this paper better. If you work with learning algorithms on a regular basis, make sure to check out Weights & Biases. Their system is designed to help you organize your experiments, and it is so good it could shave off weeks or even months of work from your projects, and it is completely free for all individuals, academics and open source projects. This really is as good as it gets, and it is hardly a surprise that they are now used by over 200 companies and research institutions. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
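The transcript describes the method as splitting the video into per-subject layers, each carrying the subject together with its shadows, reflections and other correlated effects; removal then amounts to compositing the remaining layers over the background. Here is a minimal back-to-front alpha compositing sketch of that idea in NumPy, under my own assumptions about the layer format; it is not the paper's actual pipeline.

```python
# Minimal sketch: compositing per-subject RGBA layers over a background frame.
# Skipping a layer removes that subject together with its attached effects
# (shadows, reflections), assuming each layer already contains them.
import numpy as np

def composite(background, layers):
    """background: HxWx3 float in [0,1]; layers: list of HxWx4 RGBA float arrays."""
    out = background.copy()
    for layer in layers:                      # back-to-front "over" operator
        rgb, alpha = layer[..., :3], layer[..., 3:4]
        out = alpha * rgb + (1.0 - alpha) * out
    return out

H, W = 4, 4
background = np.zeros((H, W, 3))              # clean plate
boy_layer = np.random.rand(H, W, 4)           # subject + shadow matte (toy data)
dog_layer = np.random.rand(H, W, 4)

full = composite(background, [boy_layer, dog_layer])   # everything visible
no_dog = composite(background, [boy_layer])            # dog (and its shadow) removed
```

The same structure also explains the background swap: keep all the subject layers and simply hand a different clean plate to the compositor.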
[{"start": 0.0, "end": 4.76, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.76, "end": 11.16, "text": " Today, we are going to see magical things that open up when we are able to automatically"}, {"start": 11.16, "end": 15.64, "text": " find the foreground and the background of a video."}, {"start": 15.64, "end": 18.48, "text": " Let's see why that matters."}, {"start": 18.48, "end": 23.72, "text": " This new technique leans on a previous method to find the boy and the dog."}, {"start": 23.72, "end": 27.36, "text": " Let's call this level 1 segmentation."}, {"start": 27.36, "end": 31.68, "text": " So far so good, but this is not the state of the art."}, {"start": 31.68, "end": 34.4, "text": " Yet, now comes level 2."}, {"start": 34.4, "end": 39.92, "text": " It also found the shadow of the boy and the shadow of the dog."}, {"start": 39.92, "end": 43.4, "text": " Now we are talking, but it doesn't stop there."}, {"start": 43.4, "end": 45.68, "text": " It gets even better."}, {"start": 45.68, "end": 49.36, "text": " Level 3, this is where things get out of hand."}, {"start": 49.36, "end": 55.879999999999995, "text": " Look, the dog is occluding the boy's shadow and it is able to deal with that too."}, {"start": 55.88, "end": 62.080000000000005, "text": " So if we can identify all of the effects that are attached to the boy and the dog, what"}, {"start": 62.080000000000005, "end": 64.72, "text": " can we do with all this information?"}, {"start": 64.72, "end": 70.04, "text": " Well, for instance, we can even remove them from the video."}, {"start": 70.04, "end": 71.32000000000001, "text": " Nothing to see here."}, {"start": 71.32000000000001, "end": 77.28, "text": " Now, a common problem is that still the silhouette of the subject still remains in the final"}, {"start": 77.28, "end": 81.80000000000001, "text": " footage, so let's take a close look together."}, {"start": 81.80000000000001, "end": 84.44, "text": " I don't see anything at all."}, {"start": 84.44, "end": 86.44, "text": " Wow!"}, {"start": 86.44, "end": 89.44, "text": " Let me know in the comments below."}, {"start": 89.44, "end": 94.96, "text": " Just to showcase how good this removal is, here is a good technique from just one year"}, {"start": 94.96, "end": 95.96, "text": " ago."}, {"start": 95.96, "end": 97.72, "text": " Do you see it?"}, {"start": 97.72, "end": 103.92, "text": " This requires the shadows to be found manually, so we have to work with that."}, {"start": 103.92, "end": 109.8, "text": " And still, in the outputs you can see the silhouette we mentioned."}, {"start": 109.8, "end": 112.47999999999999, "text": " And how much better is the new method?"}, {"start": 112.48, "end": 119.28, "text": " Well, it finds the shadows automatically that is already mind-blowing and the outputs"}, {"start": 119.28, "end": 120.28, "text": " are\u2026"}, {"start": 120.28, "end": 123.28, "text": " Yes, much cleaner."}, {"start": 123.28, "end": 128.28, "text": " Not perfect, there is still some silhouette action, but if I were not actively looking"}, {"start": 128.28, "end": 132.04, "text": " for it, I might not have noticed it."}, {"start": 132.04, "end": 137.84, "text": " It can also remove people from this trampoline scene and not only the bodies, but it also"}, {"start": 137.84, "end": 142.84, "text": " removes their effect on the trampolines as well."}, {"start": 142.84, "end": 144.28, "text": " Wow!"}, {"start": 144.28, "end": 150.84, "text": " And as this method can perform 
all this reliably, it opens up the possibility for new, magical"}, {"start": 150.84, "end": 151.84, "text": " effects."}, {"start": 151.84, "end": 158.56, "text": " For instance, we can duplicate this test subject and even feed it in and out."}, {"start": 158.56, "end": 162.32, "text": " Note that it has found its shadows as well."}, {"start": 162.32, "end": 163.32, "text": " Excellent!"}, {"start": 163.32, "end": 169.32, "text": " So, it can deal with finding not only the shape of the boy and the dog, and it knows that"}, {"start": 169.32, "end": 175.76, "text": " it's not enough to just find their silhouettes, but it also has to find additional effects they"}, {"start": 175.76, "end": 177.76, "text": " have on the footage."}, {"start": 177.76, "end": 180.12, "text": " For instance, their shadows."}, {"start": 180.12, "end": 186.12, "text": " That is wonderful, and what is even more wonderful is that this was only one of the simpler things"}, {"start": 186.12, "end": 187.92, "text": " it could do."}, {"start": 187.92, "end": 193.35999999999999, "text": " Those are not the only potential correlated effects, look, a previous method was able to"}, {"start": 193.35999999999999, "end": 198.32, "text": " find this one here, but it's not enough to remove it because it also has additional"}, {"start": 198.32, "end": 200.92, "text": " effects on the scene."}, {"start": 200.92, "end": 201.92, "text": " What are those?"}, {"start": 201.92, "end": 208.39999999999998, "text": " Well, look, it has reflections, and it creates ripples too."}, {"start": 208.39999999999998, "end": 212.6, "text": " This is so much more difficult than just finding shadows."}, {"start": 212.6, "end": 215.11999999999998, "text": " And now, let's see the new method."}, {"start": 215.12, "end": 224.28, "text": " And it knows about the reflections and ripples, finds both of them and gives us this beautifully"}, {"start": 224.28, "end": 226.32, "text": " clean result."}, {"start": 226.32, "end": 228.16, "text": " Nothing to see here."}, {"start": 228.16, "end": 231.72, "text": " And also, look at this elephant."}, {"start": 231.72, "end": 236.8, "text": " Removing just the silhouette of the elephant is not enough, it also has to find all the dust"}, {"start": 236.8, "end": 243.6, "text": " around it, and it gets worse, the dust is changing rapidly over time."}, {"start": 243.6, "end": 250.56, "text": " And believe it or not, wow, it can find a dust tool and remove the elephant."}, {"start": 250.56, "end": 253.88, "text": " Again, nothing to see here."}, {"start": 253.88, "end": 259.28, "text": " And if you think that this dust was the new algorithm at its best, then have a look at"}, {"start": 259.28, "end": 261.88, "text": " this drifting car."}, {"start": 261.88, "end": 262.88, "text": " Previous method?"}, {"start": 262.88, "end": 267.4, "text": " Yes, that is the car, but you know what I want."}, {"start": 267.4, "end": 270.24, "text": " I want the smoke gone too."}, {"start": 270.24, "end": 273.84000000000003, "text": " So that's probably impossible, right?"}, {"start": 273.84000000000003, "end": 276.52, "text": " Well, let's have a look."}, {"start": 276.52, "end": 280.24, "text": " Wow, I can't believe it."}, {"start": 280.24, "end": 287.8, "text": " It grabbed and removed the car and the smoke together, and once again, nothing to see here."}, {"start": 287.8, "end": 292.16, "text": " So what are those more magical things that this opens up?"}, {"start": 292.16, "end": 293.44, "text": " Watch carefully."}, {"start": 293.44, 
"end": 297.0, "text": " It can make the colors pop here."}, {"start": 297.0, "end": 304.0, "text": " And remember, it can find the reflections of the flamingo, so it keeps not only the flamingo,"}, {"start": 304.0, "end": 309.0, "text": " but the reflection of the flamingo in color as well."}, {"start": 309.0, "end": 310.8, "text": " Absolutely amazing."}, {"start": 310.8, "end": 316.56, "text": " And if we can find the background of the video, we can even change the background."}, {"start": 316.56, "end": 322.2, "text": " This works even in the presence of a moving camera, which is a challenging problem."}, {"start": 322.2, "end": 325.6, "text": " Now of course, not even this technique is perfect."}, {"start": 325.6, "end": 326.92, "text": " Look here."}, {"start": 326.92, "end": 332.36, "text": " The reflections are copied off of the previous scene and it shows on the new one."}, {"start": 332.36, "end": 335.16, "text": " So, what do you think?"}, {"start": 335.16, "end": 337.36, "text": " What would you use this technique for?"}, {"start": 337.36, "end": 342.24, "text": " Let me know in the comments or if you wish to discuss similar topics with other fellow"}, {"start": 342.24, "end": 347.52000000000004, "text": " scholars in a warm and welcoming environment, make sure to join our Discord channel."}, {"start": 347.52000000000004, "end": 353.16, "text": " Also, I would like to send a big thank you to the mods and everyone who helps running"}, {"start": 353.16, "end": 354.48, "text": " this community."}, {"start": 354.48, "end": 357.96000000000004, "text": " The link to the server is available in the video description."}, {"start": 357.96000000000004, "end": 359.52000000000004, "text": " You are invited."}, {"start": 359.52000000000004, "end": 364.04, "text": " What you see here is a report of this exact paper we have talked about, which was made"}, {"start": 364.04, "end": 365.72, "text": " by Wades and Biasis."}, {"start": 365.72, "end": 367.92, "text": " I put a link to it in the description."}, {"start": 367.92, "end": 368.92, "text": " Make sure to have a look."}, {"start": 368.92, "end": 372.52000000000004, "text": " I think it helps you understand this paper better."}, {"start": 372.52000000000004, "end": 378.28000000000003, "text": " If you work with learning algorithms on a regular basis, make sure to check out Wades and Biasis."}, {"start": 378.28000000000003, "end": 383.20000000000005, "text": " Their system is designed to help you organize your experiments and it is so good it could"}, {"start": 383.2, "end": 388.47999999999996, "text": " shave off weeks or even months of work from your projects and is completely free for all"}, {"start": 388.47999999999996, "end": 392.52, "text": " individuals, academics and open source projects."}, {"start": 392.52, "end": 397.2, "text": " This really is as good as it gets and it is hardly a surprise that they are now used by"}, {"start": 397.2, "end": 401.08, "text": " over 200 companies and research institutions."}, {"start": 401.08, "end": 406.59999999999997, "text": " Make sure to visit them through wnb.com slash papers or just click the link in the video"}, {"start": 406.59999999999997, "end": 409.71999999999997, "text": " description and you can get a free demo today."}, {"start": 409.72, "end": 414.72, "text": " Our thanks to Wades and Biasis for their longstanding support and for helping us make better"}, {"start": 414.72, "end": 415.88000000000005, "text": " videos for you."}, {"start": 415.88, "end": 445.44, "text": " Thanks for 
watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=uuzow7TEQ1s
DeepMind’s AI Plays Catch…And So Much More! 🤖
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Open-Ended Learning Leads to Generally Capable Agents" is available here: https://deepmind.com/blog/article/generally-capable-agents-emerge-from-open-ended-play https://deepmind.com/research/publications/open-ended-learning-leads-to-generally-capable-agents ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Or join us here: https://www.youtube.com/user/keeroyz/join Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #deepmind
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to see how an AI can win a complex game that it has never seen before. Zero prior training on that game. Yes, really. Now, before that, for context, have a look at this related work from 2019, where scientists at OpenAI built a super-fun hide-and-seek game for their AI agents to play. And boy, did they do some crazy stuff. Now, these agents learned from previous experiences, and to the surprise of no one, for the first few million rounds we start out with pandemonium. Everyone just running around aimlessly. Then, over time, the hiders learned to lock out the seekers by blocking the doors off with these boxes and started winning consistently. I think the coolest part about this is that the map was deliberately designed by the OpenAI scientists in a way that the hiders can only succeed through collaboration. But then, something happened. Did you notice this pointy door-stop-shaped object? Are you thinking what I am thinking? Well, probably. And not only that, but later, the AI also discovered that it can be pushed near a wall and be used as a ramp and... Tada! Got him! Then, it was up to the hiders again to invent something new. So, did they do that? Can this crazy strategy be defeated? Well, check this out. These resourceful little critters learned that there is a little time at the start of the game when the seekers are frozen and apparently cannot see them, so why not just sneak out and steal the ramp and lock it away from them? Absolutely incredible. Look at those happy eyes as they are carrying that ramp. But today is not 2019, it is 2021, so I wonder what scientists at the other amazing AI lab, DeepMind, have been up to. Can this paper be topped? Well, believe it or not, they have managed to create something that is perhaps even crazier than this. This new paper proposes that these AI agents look at the screen just like a human would and engage in open-ended learning where the tasks are always changing. What does this mean? Well, it means that these agents are not preparing for an exam. They are preparing for life. Hopefully, they learn more general concepts and, as a result, maybe excel at a variety of different tasks. Even better, these scientists at DeepMind claim that their AI agents not only excel at a variety of tasks, but they excel at new ones they have never seen before. Those are big words, so let's see the results. The red agent here is the hider and the blue is the seeker. They both understand their roles, the red agent is running, and the blue is seeking. Look, its viewing direction is shown with this lightsaber-looking line pointing at the red agent. No wonder it is running away. And look, it manages to get some distance from the seeker and finds a new, previously unexplored part of the map and hides there. Excellent. And you would think that the Star Wars references end here? No, not even close. Look, in a more advanced variant of the game, this green seeker lost the two other hiders, and what does he do? Ah, yes, of course, grabs his lightsaber and takes the high ground. Then, it spots the red agent and starts chasing it, and all this without ever having played this game before. That is excellent. In this cooperative game, the agents are asked to get as close to the purple pyramid as they can. Of course, to achieve that, they need to build a ramp, which they successfully realize. Excellent. But it gets better.
Now, note that we did not say that the task is to build a ramp; the task is to get as close to the purple pyramid as we can. Does that mean... yes, yes it does. Great job bending the rules, little AI. In this game, the agent is asked to stop the purple ball from touching the red floor. At first, it tries its best to block the rolling of the ball with its body. Then, look, it realizes that it is much better to just push it against the wall. And it gets even better, look, it learned that it is best to just chuck the ball behind this slab. It is completely right, this needs no further energy expenditure, and the ball never touches the red floor again. Great. And finally, in this king of the hill game, the goal is to take the white floor and get the other agent out of there. As they are playing this game for the first time, they have no idea where the white floor is. As soon as the blue agent finds it, it stays there, so far so good. But this is not a cooperative game. We have an opponent here, look, boom, a quite potent opponent indeed who can take the blue agent out, and it understands that it has to camp in there and defend the region. Again, awesome. So, the goal here is not to be an expert in one game, but to be a journeyman in many games. And these agents are working really well at a variety of games without ever having played them. So, in summary, OpenAI's agent is an expert in a narrower domain, and DeepMind's agent is a journeyman in a broader domain. Two different kinds of intelligence, both doing amazing things. Loving it. What a time to be alive. Scientists at DeepMind have knocked it out of the park with this one. They have also published AlphaFold this year, a huge breakthrough in which an AI predicts protein structures. Now, I saw some of you asking why we didn't cover it. Is it not an important work? Well, quite the opposite. I am spellbound by it, and I think that paper is a great gift to humanity. I try my best to educate myself on this topic; however, I don't feel that I am qualified to speak about it. Not yet, anyway. So, I think it is best to let the experts who know more about this take the stage. This is, of course, bad for views, but no matter, we are not maximizing views here, we are maximizing meaning. This episode has been supported by Lambda GPU Cloud. If you are looking for inexpensive cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only cloud service with 48GB RTX 8000s. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support, and for helping us make better videos for you. Thanks for watching, and for your generous support, and I'll see you next time.
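The claim in this transcript is that a single trained policy is dropped into games it has never played and still scores well. Here is a minimal sketch of that kind of zero-shot evaluation loop with a Gym-style reset/step interface; the task names, the toy environment and the policy are placeholders I made up, not DeepMind's actual task suite.

```python
# Sketch of zero-shot evaluation: one fixed policy, many unseen tasks.
# `make_task` and `policy` are placeholders, not DeepMind's actual API.
import random

def make_task(name, seed=0):
    """Stand-in environment with a Gym-style reset/step interface."""
    rng = random.Random(hash(name) + seed)
    class Env:
        def reset(self):
            return rng.random()                     # toy observation
        def step(self, action):
            obs = rng.random()
            reward = 1.0 if action > 0.5 else 0.0   # toy reward
            done = rng.random() < 0.1               # episode ends eventually
            return obs, reward, done
    return Env()

def policy(obs):
    return obs  # stand-in for a trained, general policy

unseen_tasks = ["hide_and_seek", "king_of_the_hill", "keep_ball_off_floor"]
for name in unseen_tasks:
    env = make_task(name)
    obs, total, done = env.reset(), 0.0, False
    while not done:                                  # no task-specific training
        obs, reward, done = env.step(policy(obs))
        total += reward
    print(f"{name}: return {total:.1f}")
```

The point of the sketch is only the structure: the policy's weights are frozen before the loop starts, so any return it earns on these tasks is, by construction, zero-shot.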
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.8, "end": 12.0, "text": " Today, we are going to see how an AI can win a complex game that it has never seen before."}, {"start": 12.0, "end": 15.0, "text": " Zero prior training on that game."}, {"start": 15.0, "end": 16.8, "text": " Yes, really."}, {"start": 16.8, "end": 22.3, "text": " Now, before that, for context, have a look at this related work from 2019,"}, {"start": 22.3, "end": 29.0, "text": " where scientists at OpenAI built a super-fun high-density game for their AI agents to play."}, {"start": 29.0, "end": 33.0, "text": " And boy, did they do some crazy stuff."}, {"start": 33.0, "end": 38.5, "text": " Now, these agents learned from previous experiences and to the surprise of no one,"}, {"start": 38.5, "end": 43.8, "text": " for the first few million rounds we start out with Pandemonium."}, {"start": 43.8, "end": 47.5, "text": " Everyone just running around aimlessly."}, {"start": 47.5, "end": 52.3, "text": " Then, over time, the hiders learned to lock out the seekers"}, {"start": 52.3, "end": 58.0, "text": " by blocking the doors off with these boxes and started winning consistently."}, {"start": 58.0, "end": 64.5, "text": " I think the coolest part about this is that the map was deliberately designed by the OpenAI scientists"}, {"start": 64.5, "end": 69.5, "text": " in a way that the hiders can only succeed through collaboration."}, {"start": 69.5, "end": 72.0, "text": " But then, something happened."}, {"start": 72.0, "end": 76.0, "text": " Did you notice this pointy door-stop-shaped object?"}, {"start": 76.0, "end": 79.0, "text": " Are you thinking what I am thinking?"}, {"start": 79.0, "end": 80.5, "text": " Well, probably."}, {"start": 80.5, "end": 87.5, "text": " And not only that, but later, the AI also discovered that it can be pushed near a wall"}, {"start": 87.5, "end": 91.5, "text": " and be used as a ramp and..."}, {"start": 91.5, "end": 94.0, "text": " Tada! 
Got him!"}, {"start": 94.0, "end": 99.0, "text": " Then, it was up to the hiders again to invent something new."}, {"start": 99.0, "end": 101.5, "text": " So, did they do that?"}, {"start": 101.5, "end": 104.5, "text": " Can this crazy strategy be defeated?"}, {"start": 104.5, "end": 106.5, "text": " Well, check this out."}, {"start": 106.5, "end": 112.0, "text": " These resourceful little critters learned that since there is a little time at the start of the game"}, {"start": 112.0, "end": 118.5, "text": " when the seekers are frozen, apparently, during this time, they cannot see them,"}, {"start": 118.5, "end": 124.5, "text": " so why not just sneak out and steal the ramp and lock it away from them?"}, {"start": 124.5, "end": 126.5, "text": " Absolutely incredible."}, {"start": 126.5, "end": 130.5, "text": " Look at those happy eyes as they are carrying that ramp."}, {"start": 130.5, "end": 138.5, "text": " But, today is not 2019, it is 2021, so I wonder what scientists at the other amazing AI lab"}, {"start": 138.5, "end": 140.5, "text": " deep-mind have been up to?"}, {"start": 140.5, "end": 143.0, "text": " Can this paper be topped?"}, {"start": 143.0, "end": 150.0, "text": " Well, believe it or not, they have managed to create something that is perhaps even crazier than this."}, {"start": 150.0, "end": 156.0, "text": " This new paper proposes that these AI agents look at the screen just like a human wood"}, {"start": 156.0, "end": 161.0, "text": " and engage in open-ended learning where the tasks are always changing."}, {"start": 161.0, "end": 163.0, "text": " What does this mean?"}, {"start": 163.0, "end": 167.0, "text": " Well, it means that these agents are not preparing for an exam."}, {"start": 167.0, "end": 169.5, "text": " They are preparing for life."}, {"start": 169.5, "end": 177.5, "text": " Hopefully, they learn more general concepts and, as a result, maybe excel at a variety of different tasks."}, {"start": 177.5, "end": 185.5, "text": " Even better, these scientists at deep-mind claim that their AI agents not only excel at a variety of tasks,"}, {"start": 185.5, "end": 190.0, "text": " but they excel at new ones they have never seen before."}, {"start": 190.0, "end": 194.0, "text": " Those are big words, so let's see the results."}, {"start": 194.0, "end": 198.5, "text": " The red agent here is the Heider and the blue is the Seeker."}, {"start": 198.5, "end": 204.5, "text": " They both understand their roles, the red agent is running, and the blue is seeking."}, {"start": 204.5, "end": 211.5, "text": " Look, its viewing direction is shown with this lightsaber-looking line pointing at the red agent."}, {"start": 211.5, "end": 213.5, "text": " No wonder it is running away."}, {"start": 213.5, "end": 224.0, "text": " And look, it manages to get some distance from the Seeker and finds a new, previously unexplored part of the map and hides there."}, {"start": 224.0, "end": 225.5, "text": " Excellent."}, {"start": 225.5, "end": 232.5, "text": " And you would think that the Star Wars references and here, no, not even close."}, {"start": 232.5, "end": 241.5, "text": " Look, in a more advanced variant of the game, this green Seeker lost the two other Heiders, and what does he do?"}, {"start": 241.5, "end": 247.5, "text": " Ah, yes, of course, grabs his lightsaber and takes the high ground."}, {"start": 247.5, "end": 255.5, "text": " Then, it spuds the red agent and starts chasing it, and all this without ever having played this game before."}, {"start": 255.5, "end": 257.5, 
"text": " That is excellent."}, {"start": 257.5, "end": 264.0, "text": " In this cooperative game, the agents are asked to get as close to the purple pyramid as they can."}, {"start": 264.0, "end": 272.0, "text": " Of course, to achieve that, they need to build a ramp, which they successfully realize. Excellent."}, {"start": 272.0, "end": 283.0, "text": " But it gets better. Now, note that we did not say that the task is to build a ramp, the task is to get as close to the purple pyramid as we can."}, {"start": 283.0, "end": 287.0, "text": " Does that mean... yes, yes it does."}, {"start": 287.0, "end": 290.0, "text": " Great job bending the rules, little AI."}, {"start": 290.0, "end": 295.5, "text": " In this game, the agent is asked to stop the purple ball from touching the red floor."}, {"start": 295.5, "end": 301.0, "text": " At first, it tries its best to block the rolling of the ball with its body."}, {"start": 301.0, "end": 308.0, "text": " Then, look, it realizes that it is much better to just push it against the wall."}, {"start": 308.0, "end": 315.0, "text": " And it gets even better, look, it learned that best is to just chuck the ball behind this slab."}, {"start": 315.0, "end": 325.0, "text": " It is completely right, this needs no further energy expenditure, and the ball never touches the red floor again. Great."}, {"start": 325.0, "end": 333.0, "text": " And finally, in this king of the here game, the goal is to take the white floor and get the other agent out of there."}, {"start": 333.0, "end": 339.0, "text": " As they are playing this game for the first time, they have no idea where the white floor is."}, {"start": 339.0, "end": 348.5, "text": " As soon as the blue agent finds it, it stays there, so far so good. But this is not a cooperative game."}, {"start": 348.5, "end": 356.5, "text": " We have an opponent here, look, boom, a quite potent opponent indeed who can take the blue agent out,"}, {"start": 356.5, "end": 362.5, "text": " and it understands that it has to camp in there and defend the region."}, {"start": 362.5, "end": 364.5, "text": " Again, awesome."}, {"start": 364.5, "end": 371.5, "text": " So, the goal here is not to be an expert in one game, but to be a journeyman in many games."}, {"start": 371.5, "end": 377.5, "text": " And these agents are working really well at a variety of games without ever having played them."}, {"start": 377.5, "end": 386.5, "text": " So, in summary, open AI's agent, expert in an narrower domain, deep minds agent, journeyman in a broader domain."}, {"start": 386.5, "end": 392.5, "text": " Two different kinds of intelligence, both doing amazing things."}, {"start": 392.5, "end": 399.5, "text": " Loving it. What a time to be alive. Scientists at DeepMind have knocked it out of the park with this one."}, {"start": 399.5, "end": 406.5, "text": " They have also published AlphaFull this year a huge breakthrough that makes an AI-predict protein structures."}, {"start": 406.5, "end": 410.5, "text": " Now, I saw some of you asking why we didn't cover it."}, {"start": 410.5, "end": 420.5, "text": " Is it not an important work? Well, quite the opposite. 
I am spellbound by it, and I think that paper is a great gift to humanity."}, {"start": 420.5, "end": 428.5, "text": " However, I try my best to educate myself on this topic, however, I don't feel that I am qualified to speak about it."}, {"start": 428.5, "end": 430.5, "text": " Not yet, anyway."}, {"start": 430.5, "end": 436.5, "text": " So, I think it is best to let the experts who know more about this take the stage."}, {"start": 436.5, "end": 444.5, "text": " This is, of course, bad for views, but no matter, we are not maximizing views here, we are maximizing meaning."}, {"start": 444.5, "end": 448.5, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 448.5, "end": 454.5, "text": " If you are looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 454.5, "end": 468.5, "text": " You recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Asia."}, {"start": 468.5, "end": 473.5, "text": " Plus, they are the only Cloud service with 48GB RTX 8000."}, {"start": 473.5, "end": 482.5, "text": " Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers."}, {"start": 482.5, "end": 488.5, "text": " Make sure to go to lambdaleps.com, slash papers to sign up for one of their amazing GPU instances today."}, {"start": 488.5, "end": 494.5, "text": " Our thanks to Lambda for their long-standing support, and for helping us make better videos for you."}, {"start": 494.5, "end": 522.5, "text": " Thanks for watching, and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=iZA9bl-t6J4
Virtual Bones Make Everything Better! 💪
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/jxmorris12/huggingface-demo/reports/A-Step-by-Step-Guide-to-Tracking-Hugging-Face-Model-Performance--VmlldzoxMDE2MTU 📝 The paper "Direct Delta Mush Skinning Compression with Continuous Examples" is available here: https://binh.graphics/papers/2021s-DDMC/ https://media.contentapi.ea.com/content/dam/ea/seed/presentations/ddm-compression-with-continuous-examples.pdf 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Or join us here: https://www.youtube.com/user/keeroyz/join Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #gamedev
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to see that virtual bones make everything better. This new paper is about setting up bones and joints for our virtual characters to be able to compute deformations. Deformations are at the heart of computer animation, look, all of these sequences require a carefully designed technique that can move these joints around and simulate their effect on the entirety of the body. Things move around, stretch and bulge. But, there is a problem. What's the problem? Well, even with state-of-the-art deformation techniques, sometimes this happens. Did you catch it? There is the problem. Look, the hip region unfortunately bulges inward. Is this specific to this technique? No, no, not in the slightest. Pretty much all of the previous techniques exhibit this to some extent. This is perhaps the most intense case of this inward bulging. So, let's have a taste of the new method. How does it deal with a case like this? Perfectly. That's how. Loving it. Now, hold onto your papers because it works by creating something that the authors refer to as virtual bones. Let's look under the hood and locate them. There they are. These red dots showcase these virtual bones. We can set them up as a parameter and the algorithm distributes them automatically. Here we have a hundred of them, but we can go to 200, or if we so desire, we can request even 500 of them. So, what difference does this make? With a hundred virtual bones, let's see. Yes. Here you see that the cooler colors like blue showcase the regions that are deforming accurately, and the warmer colors, for instance red, showcase the problematic regions where the technique did not perform well. The red part means that these deformations can be off by about 2 centimeters, or about three-quarters of an inch. I would say that is great news because even with only a hundred virtual bones we get an acceptable animation. However, the technique is still somewhat inaccurate around the knee and the hips. If you are one of our really precise fellow scholars and feel that even that tiny mismatch is too much, we can raise the number of virtual bones to 500 and let's see. There we go. Still some imperfections around the knees, but the rest is accurate to a small fraction of an inch. Excellent. The hips and knees seem to be a common theme. Look, they show up in this example too. And as in the previous case, even the hundred-virtual-bone animation is acceptable, and most of the problems can be remedied by raising the count to 500. Still some issues around the elbows. So far we have looked at the new solution and marked the good and bad regions with heat maps. So now how about looking at the reference footage and the new technique side by side? Why? Because we'll find out whether it is just good at fooling the human eye or whether it really matches up. Let's have a look together. This is linear blend skinning, the state-of-the-art method. For now we can accept this as the reference. Note that setting this up is expensive both in terms of computation, and it also requires a skilled artist to place these helper joints correctly. This looks great. So how does the new method with the virtual bones look under the hood? These correspond to those. So why do all this? Because the new method can be computed much, much cheaper. So let's see what the results look like. Mmm, yeah. Very close to the reference results. Absolutely amazing. Now let's run a torture test that would make any computer graphics researcher blush.
Oh my, there are so many characters here animated at the same time. So how long do we have to wait for these accurate simulations? Minutes to hours? Let me know your guess in the comments below. I'll wait. Thank you. Now hold on to your papers because all this takes about 5 milliseconds per frame. 5 milliseconds. That is well over 100 characters rocking out, and the new technique doesn't even break a sweat. So I hope that with this, computer animations are going to become a lot more realistic in the near future. What a time to be alive. Also make sure to have a look at the paper in the video description. I loved the beautiful mathematics and the clarity in there. It clearly states the contributions in a bulleted list, which is a more and more common occurrence, and that's good. But look, it even provides an image of these contributions right there, making it even clearer to the reader. Generally, details like this show that the authors went out of their way and spent a great deal of extra time writing a crystal-clear paper. It takes much, much more time than many may imagine, so I would like to send a big thank you to the authors for that. Way to go. This episode has been supported by Weights & Biases. In this post, they show you how to use Transformers from the Hugging Face library and how to track your model performance. During my PhD studies, I trained a ton of neural networks which were used in our experiments. However, over time, there was just too much data in our repositories, and what I am looking for is not data, but insight. And that's exactly how Weights & Biases helps you, by organizing your experiments. It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub, and more. And get this, Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
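The transcript compares the virtual-bone result against linear blend skinning with helper joints, where each vertex is deformed by a weighted sum of bone transforms. Here is a minimal NumPy sketch of plain linear blend skinning to make that formula concrete; it is not the paper's Direct Delta Mush compression, and the toy data is made up.

```python
# Minimal linear blend skinning (LBS) sketch, not the paper's method.
# Each vertex is a weighted blend of per-bone rigid transforms:
#   v' = sum_b w_b * (R_b v + t_b)
import numpy as np

def linear_blend_skinning(vertices, rotations, translations, weights):
    """
    vertices:     (V, 3) rest-pose positions
    rotations:    (B, 3, 3) per-bone rotation matrices
    translations: (B, 3) per-bone translations
    weights:      (V, B) skinning weights, rows sum to 1
    returns:      (V, 3) deformed positions
    """
    # Transform every vertex by every bone: (B, V, 3)
    per_bone = np.einsum('bij,vj->bvi', rotations, vertices) + translations[:, None, :]
    # Blend the per-bone results with the skinning weights: (V, 3)
    return np.einsum('vb,bvi->vi', weights, per_bone)

V, B = 5, 3                                   # toy sizes: 5 vertices, 3 bones
rest = np.random.rand(V, 3)
R = np.stack([np.eye(3)] * B)                 # identity rotations
t = np.zeros((B, 3)); t[1] = [0.0, 0.1, 0.0]  # move bone 1 up a little
W = np.random.rand(V, B); W /= W.sum(axis=1, keepdims=True)
print(linear_blend_skinning(rest, R, t, W))
```

The "virtual bones" in the transcript can be thought of as extra transforms of this kind that are placed and weighted automatically, which is what makes the per-frame cost so low.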
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.8, "end": 9.9, "text": " Today, we are going to see that virtual bones make everything better."}, {"start": 9.9, "end": 14.8, "text": " This new paper is about setting up bones and joints for our virtual characters"}, {"start": 14.8, "end": 17.400000000000002, "text": " to be able to compute deformations."}, {"start": 17.400000000000002, "end": 21.3, "text": " Deformations are at the heart of computer animation,"}, {"start": 21.3, "end": 26.0, "text": " look, all of these sequences require a carefully designed technique"}, {"start": 26.0, "end": 31.8, "text": " that can move these joints around and simulate its effect on the entirety of the body."}, {"start": 31.8, "end": 35.7, "text": " Things move around, stretch and bulge."}, {"start": 35.7, "end": 38.6, "text": " But, there is a problem."}, {"start": 38.6, "end": 40.0, "text": " What's the problem?"}, {"start": 40.0, "end": 46.1, "text": " Well, even with state-of-the-art deformation techniques, sometimes this happens."}, {"start": 46.1, "end": 48.0, "text": " Did you catch it?"}, {"start": 48.0, "end": 49.400000000000006, "text": " There is the problem."}, {"start": 49.400000000000006, "end": 54.6, "text": " Look, the hip region unfortunately bulges inward."}, {"start": 54.6, "end": 56.9, "text": " Is this specific to this technique?"}, {"start": 56.9, "end": 59.2, "text": " No, no, not in the slightest."}, {"start": 59.2, "end": 63.5, "text": " Pretty much all of the previous techniques showcase that to some effect."}, {"start": 63.5, "end": 68.1, "text": " This is perhaps our most intense case of this inward bulging."}, {"start": 68.1, "end": 71.2, "text": " So, let's have a taste of the new method."}, {"start": 71.2, "end": 74.1, "text": " How does it deal with a case like this?"}, {"start": 74.1, "end": 75.1, "text": " Perfectly."}, {"start": 75.1, "end": 76.1, "text": " That's how."}, {"start": 76.1, "end": 77.7, "text": " Loving it."}, {"start": 77.7, "end": 82.4, "text": " Now, hold onto your papers because it works by creating something"}, {"start": 82.4, "end": 86.0, "text": " that the authors refer to as virtual bones."}, {"start": 86.0, "end": 90.10000000000001, "text": " Let's look under the hood and locate them."}, {"start": 90.10000000000001, "end": 91.2, "text": " There they are."}, {"start": 91.2, "end": 94.60000000000001, "text": " These red dots showcase these virtual bones."}, {"start": 94.60000000000001, "end": 101.9, "text": " We can set them up as a parameter and the algorithm distributes them automatically."}, {"start": 101.9, "end": 106.5, "text": " Here we have a hundred of them, but we can go to 200,"}, {"start": 106.5, "end": 111.9, "text": " or if we so desire, we can request even 500 of them."}, {"start": 111.9, "end": 114.80000000000001, "text": " So, what difference does this make?"}, {"start": 114.80000000000001, "end": 118.10000000000001, "text": " With a hundred virtual bones, let's see."}, {"start": 118.10000000000001, "end": 119.10000000000001, "text": " Yes."}, {"start": 119.10000000000001, "end": 122.0, "text": " Here you see that the cooler colors like blue"}, {"start": 122.0, "end": 125.10000000000001, "text": " showcase the regions that are deforming accurately"}, {"start": 125.10000000000001, "end": 128.1, "text": " and the warmer colors, for instance red,"}, {"start": 128.1, "end": 132.5, "text": " showcase the problematic regions where the technique did not 
perform well."}, {"start": 132.5, "end": 135.6, "text": " The red part means that these deformations can be off"}, {"start": 135.6, "end": 140.4, "text": " by about 2 centimeters or about 3-quarters of an inch."}, {"start": 140.4, "end": 145.5, "text": " I would say that is great news because even with only a hundred virtual bones"}, {"start": 145.5, "end": 148.1, "text": " we get an acceptable animation."}, {"start": 148.1, "end": 154.9, "text": " However, the technique is still somewhat inaccurate around the knee and the hips."}, {"start": 154.9, "end": 158.20000000000002, "text": " If you are one of our really precise fellow scholars"}, {"start": 158.20000000000002, "end": 161.6, "text": " and feel that even that tiny mismatch is too much"}, {"start": 161.6, "end": 168.3, "text": " we can raise the number of virtual bones to 500 and let's see."}, {"start": 168.3, "end": 169.5, "text": " There we go."}, {"start": 169.5, "end": 172.3, "text": " Still some imperfections around the knees,"}, {"start": 172.3, "end": 176.9, "text": " but the rest is accurate to a small fraction of an inch."}, {"start": 176.9, "end": 178.1, "text": " Excellent."}, {"start": 178.1, "end": 181.0, "text": " The hips and knees seem to be a common theme."}, {"start": 181.0, "end": 184.5, "text": " Look, they show up in this example too."}, {"start": 184.5, "end": 190.3, "text": " And as in the previous case, even the hundred virtual bone animation is acceptable"}, {"start": 190.3, "end": 195.3, "text": " and most of the problems can be remedied by adding 500 of them."}, {"start": 195.3, "end": 198.1, "text": " Still some issues around the elbows."}, {"start": 198.1, "end": 200.9, "text": " So far we have looked at the new solution"}, {"start": 200.9, "end": 205.29999999999998, "text": " and marked the good and bad regions with heat maps."}, {"start": 205.29999999999998, "end": 208.29999999999998, "text": " So now how about looking at the reference footage"}, {"start": 208.29999999999998, "end": 211.6, "text": " and the new technique side by side?"}, {"start": 211.6, "end": 212.6, "text": " Why?"}, {"start": 212.6, "end": 217.9, "text": " Because we'll find out whether it is just good at fooling the human eye"}, {"start": 217.9, "end": 220.79999999999998, "text": " or does it really match up."}, {"start": 220.79999999999998, "end": 222.4, "text": " Let's have a look together."}, {"start": 222.4, "end": 226.4, "text": " This is linear blend skinning, instead of the art method."}, {"start": 226.4, "end": 229.70000000000002, "text": " For now we can accept this as the reference."}, {"start": 229.70000000000002, "end": 234.70000000000002, "text": " Note that setting this up is expensive both in terms of computation"}, {"start": 234.70000000000002, "end": 237.70000000000002, "text": " and it also requires a skilled artist"}, {"start": 237.70000000000002, "end": 240.6, "text": " to place these helper joints correctly."}, {"start": 240.6, "end": 242.4, "text": " This looks great."}, {"start": 242.4, "end": 247.1, "text": " So how does the new method with the virtual bones look under the hood?"}, {"start": 247.1, "end": 249.6, "text": " These correspond to those."}, {"start": 249.6, "end": 252.3, "text": " So why do all this?"}, {"start": 252.3, "end": 256.90000000000003, "text": " Because the new method can be computed much, much cheaper."}, {"start": 256.90000000000003, "end": 260.40000000000003, "text": " So let's see what the results look like."}, {"start": 260.40000000000003, "end": 262.6, "text": " Mmm, yeah."}, 
{"start": 262.6, "end": 265.0, "text": " Very close to the reference results."}, {"start": 265.0, "end": 266.7, "text": " Absolutely amazing."}, {"start": 266.7, "end": 273.0, "text": " Now let's run a torture test that would make any computer graphics researcher blush."}, {"start": 273.0, "end": 278.8, "text": " Oh my, there are so many characters here animated at the same time."}, {"start": 278.8, "end": 282.6, "text": " So how long do we have to wait for these accurate simulations?"}, {"start": 282.6, "end": 284.6, "text": " Minutes to hours?"}, {"start": 284.6, "end": 287.1, "text": " Let me know your guess in the comments below."}, {"start": 287.1, "end": 289.2, "text": " I'll wait."}, {"start": 289.2, "end": 290.1, "text": " Thank you."}, {"start": 290.1, "end": 296.6, "text": " Now hold on to your papers because all this takes about 5 milliseconds per frame."}, {"start": 296.6, "end": 299.40000000000003, "text": " 5 milliseconds."}, {"start": 299.40000000000003, "end": 304.1, "text": " This seems well over 100 characters rocking out and the new technique"}, {"start": 304.1, "end": 306.5, "text": " doesn't even break a sweat."}, {"start": 306.5, "end": 313.4, "text": " So I hope that with this computer animations are going to become a lot more realistic in the near future."}, {"start": 313.4, "end": 315.5, "text": " What a time to be alive."}, {"start": 315.5, "end": 319.2, "text": " Also make sure to have a look at the paper in the video description."}, {"start": 319.2, "end": 323.3, "text": " I loved the beautiful mathematics and the clarity in there."}, {"start": 323.3, "end": 326.5, "text": " It clearly states the contributions in a bulleted list,"}, {"start": 326.5, "end": 329.8, "text": " which is a more and more common occurrence that's good."}, {"start": 329.8, "end": 338.1, "text": " But look, even provides an image of these contributions right there making it even clearer to the reader."}, {"start": 338.1, "end": 343.1, "text": " Generally, details like this show that the authors went out of their way"}, {"start": 343.1, "end": 348.0, "text": " and spent a great deal of extra time writing a crystal clear paper."}, {"start": 348.0, "end": 351.8, "text": " It takes much, much more time than many may imagine,"}, {"start": 351.8, "end": 356.0, "text": " so I would like to send a big thank you to the authors for that."}, {"start": 356.0, "end": 357.3, "text": " Way to go."}, {"start": 357.3, "end": 360.6, "text": " This episode has been supported by weights and biases."}, {"start": 360.6, "end": 365.1, "text": " In this post, they show you how to use transformers from the hugging face library"}, {"start": 365.1, "end": 368.1, "text": " and how to track your model performance."}, {"start": 368.1, "end": 373.6, "text": " During my PhD studies, I trained a ton of neural networks which were used in our experiments."}, {"start": 373.6, "end": 378.2, "text": " However, over time, there was just too much data in our repositories"}, {"start": 378.2, "end": 381.90000000000003, "text": " and what I am looking for is not data, but insight."}, {"start": 381.90000000000003, "end": 386.90000000000003, "text": " And that's exactly how weights and biases helps you by organizing your experiments."}, {"start": 386.9, "end": 390.7, "text": " It is used by more than 200 companies and research institutions,"}, {"start": 390.7, "end": 395.0, "text": " including OpenAI, Toyota Research, GitHub, and more."}, {"start": 395.0, "end": 402.09999999999997, "text": " And get this, weight and biases is free 
for all individuals, academics, and open source projects."}, {"start": 402.09999999999997, "end": 406.2, "text": " Make sure to visit them through wnba.com slash papers"}, {"start": 406.2, "end": 411.09999999999997, "text": " or just click the link in the video description and you can get a free demo today."}, {"start": 411.1, "end": 417.3, "text": " Our thanks to weights and biases for their long standing support and for helping us make better videos for you."}, {"start": 417.3, "end": 447.1, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=eQRZ7FUkwKo
Simulating Bursting Soap Bubbles! 🧼
❤️ Check out Perceptilabs and sign up for a free demo here: https://www.perceptilabs.com/papers 📝 The paper "Thin-Film Smoothed Particle Hydrodynamics Fluid" is available here: https://arxiv.org/abs/2105.07656 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Or join us here: https://www.youtube.com/user/keeroyz/join Thumbnail background image credit: https://pixabay.com/images/id-801835/ Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to simulate these absolutely beautiful thin film structures. You see, computer graphics researchers have been writing physics simulation programs for decades now, and the pace of progress in this research area is absolutely stunning. Here are three examples of where we are at the moment. One, this work was able to create a breathtaking honey-coiling simulation. I find it absolutely amazing that through the power of computer graphics research, all this was possible four years ago. And the realistic simulation works just kept coming in. Two, this work appeared just one year ago and could simulate not only a piece of viscous fluid, but also deal with glugging and coalescing bubbles. And three, this particular one is blazing fast. So much so that it can simulate this dam-break scene at about five frames per second, not seconds per frame, while it can run this water drop scene at about seven frames per second. Remember, this simulates quantities like velocity, pressure, and more for several million particles, this quickly. Very impressive. So, are we done here? Is there anything else left to be done in fluid simulation research? Well, hold on to your papers and check this out. This new paper can simulate thin film phenomena. What does that mean? Well, four things. First, here is a beautiful, oscillating soap bubble. Yes, its color varies as a function of the evolving film thickness, but that's not all. Let's poke it and then... Did you see that? It can even simulate it bursting into tiny, sparkly droplets. Phew! One more time. Loving it. Second, it can simulate one of my favorites, the Rayleigh-Taylor instability. The upper half of the thin film has a larger density, while the lower half carries a larger volume. Essentially, this is the phenomenon where two fluids of different densities meet. And what is the result? Turbulence. First, the interface between the two is well defined, but over time, it slowly disintegrates into this beautiful, swirly pattern. Oh, yeah! Look! And it just keeps on going and going. Third, ah, yes! The catenoid experiment. What is that? This is a surface-tension-driven deformation experiment, where the film is trying to shrink as we move the two rims away from each other, forming this catenoid surface. Of course, we won't stop there. What happens when we keep moving them away? What do you think? Please stop the video and let me know in the comments below. I'll wait a little. Thank you. Now then, the membrane keeps shrinking until, yes, it finally collapses into a small droplet. The authors also went the extra mile and did the most difficult thing for any physics simulation paper: comparing the results to reality. So, is this just good enough to fool the untrained human eye, or is this the real deal? Well, look at this. This is an actual photograph of the catenoid experiment. And this is the simulation. Dear Fellow Scholars, that is a clean simulation right there. And fourth, a thin film within a square is subjected to a gravitational pull that is changing over time. And the result is more swirly patterns. So, how quickly can we perform all this? Disregard the FPS column; this is the inverse of the time-step size and is mainly information for fellow researchers. For now, gaze upon the time per frame column, and my goodness, this is blazing fast too. It takes less than a second per frame for the catenoid experiment, which is one of the cheaper ones. And all this on a laptop. Wow!
Now, the most expensive experiment in this paper was the Rayleigh-Taylor instability. This took about 13 seconds per frame. This is not bad at all; we can get a proper simulation of this quality within an hour or so. However, note that the authors also used a big honking machine to compute this scene. And remember, this paper is not about optimization, but it is about making the impossible possible. And it is doing all that swiftly. Huge congratulations to the authors. What a time to be alive! PerceptiLabs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers to easily install the free local version of their system today. Thanks to PerceptiLabs for their support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zonai-Fehir."}, {"start": 4.8, "end": 10.8, "text": " Today, we are going to simulate these absolutely beautiful thin film structures."}, {"start": 10.8, "end": 17.0, "text": " You see, computer graphics researchers have been writing physics simulation programs for decades now,"}, {"start": 17.0, "end": 21.8, "text": " and the pace of progress in this research area is absolutely stunning."}, {"start": 21.8, "end": 25.400000000000002, "text": " Here are three examples of where we are at the moment."}, {"start": 25.4, "end": 31.2, "text": " One, this work was able to create a breathtaking honey-coiling simulation."}, {"start": 31.2, "end": 36.4, "text": " I find it absolutely amazing that through the power of computer graphics research,"}, {"start": 36.4, "end": 39.6, "text": " all this was possible four years ago."}, {"start": 39.6, "end": 43.599999999999994, "text": " And the realistic simulation works just kept coming in."}, {"start": 43.599999999999994, "end": 49.599999999999994, "text": " This work appeared just one year ago and could simulate not only a piece of viscous fluid,"}, {"start": 49.599999999999994, "end": 54.599999999999994, "text": " but also deal with glugging and coalescing bubbles."}, {"start": 54.6, "end": 58.800000000000004, "text": " And three, this particular one is blazing fast."}, {"start": 58.800000000000004, "end": 64.8, "text": " So much so that it can simulate this dam-brake scene in about five frames per second,"}, {"start": 64.8, "end": 72.4, "text": " not seconds per frame, while it can run this water drop scene with about seven frames per second."}, {"start": 72.4, "end": 79.6, "text": " Remember, this simulates quantities like velocity, pressure, and more for several million particles,"}, {"start": 79.6, "end": 81.0, "text": " this quickly."}, {"start": 81.0, "end": 82.8, "text": " Very impressive."}, {"start": 82.8, "end": 85.0, "text": " So, are we done here?"}, {"start": 85.0, "end": 89.2, "text": " Is there anything else left to be done in fluid simulation research?"}, {"start": 89.2, "end": 93.39999999999999, "text": " Well, hold on to your papers and check this out."}, {"start": 93.39999999999999, "end": 98.0, "text": " This new paper can simulate thin film phenomena."}, {"start": 98.0, "end": 99.6, "text": " What does that mean?"}, {"start": 99.6, "end": 101.4, "text": " Well, four things."}, {"start": 101.4, "end": 106.0, "text": " First, here is a beautiful, oscillating soap bubble."}, {"start": 106.0, "end": 111.0, "text": " Yes, its color varies as a function of devolving film thickness,"}, {"start": 111.0, "end": 113.0, "text": " but that's not all."}, {"start": 113.0, "end": 116.0, "text": " Let's poke it and then..."}, {"start": 116.0, "end": 117.4, "text": " Did you see that?"}, {"start": 117.4, "end": 122.4, "text": " It can even simulate it bursting into tiny, sparkly droplets."}, {"start": 122.4, "end": 124.0, "text": " Phew!"}, {"start": 124.0, "end": 127.0, "text": " One more time."}, {"start": 127.0, "end": 129.0, "text": " Loving it."}, {"start": 129.0, "end": 134.6, "text": " Second, it can simulate one of my favorites, the Rayleigh Taylor Instability."}, {"start": 134.6, "end": 138.4, "text": " The upper half of the thin film has a larger density,"}, {"start": 138.4, "end": 142.4, "text": " while the lower half carries a larger volume."}, {"start": 142.4, "end": 148.4, "text": " Essentially, this is the phenomenon when two fluids of 
different densities meet."}, {"start": 148.4, "end": 151.0, "text": " And, what is the result?"}, {"start": 151.0, "end": 152.4, "text": " Turbulence."}, {"start": 152.4, "end": 156.0, "text": " First, the interface between the two is well defined,"}, {"start": 156.0, "end": 162.4, "text": " but over time, it slowly disintegrates into this beautiful, swirly pattern."}, {"start": 162.4, "end": 163.4, "text": " Oh, yeah!"}, {"start": 163.4, "end": 165.4, "text": " Look!"}, {"start": 165.4, "end": 169.4, "text": " And it just keeps on going and going."}, {"start": 169.4, "end": 172.4, "text": " Third, ah, yes!"}, {"start": 172.4, "end": 174.4, "text": " The catanoid experiment."}, {"start": 174.4, "end": 176.4, "text": " What is that?"}, {"start": 176.4, "end": 179.4, "text": " This is a surface tension driven deformation experiment,"}, {"start": 179.4, "end": 184.4, "text": " where the film is trying to shrink as we move the two rims away from each other,"}, {"start": 184.4, "end": 186.4, "text": " forming this catanoid surface."}, {"start": 186.4, "end": 189.4, "text": " Of course, we won't stop there."}, {"start": 189.4, "end": 192.4, "text": " What happens when we keep moving them away?"}, {"start": 192.4, "end": 194.4, "text": " What do you think?"}, {"start": 194.4, "end": 197.4, "text": " Please stop the video and let me know in the comments below."}, {"start": 197.4, "end": 199.4, "text": " I'll wait a little."}, {"start": 199.4, "end": 200.4, "text": " Thank you."}, {"start": 200.4, "end": 208.4, "text": " Now then, the membrane keeps shrinking until, yes, it finally collapses into a small droplet."}, {"start": 208.4, "end": 215.4, "text": " The authors also went the extra mile and did the most difficult thing for any physics simulation paper,"}, {"start": 215.4, "end": 218.4, "text": " comparing the results to reality."}, {"start": 218.4, "end": 225.4, "text": " So, is this just good enough to fool the untrained human eye or is this the real deal?"}, {"start": 225.4, "end": 228.4, "text": " Well, look at this."}, {"start": 228.4, "end": 232.4, "text": " This is an actual photograph of the catanoid experiment."}, {"start": 232.4, "end": 235.4, "text": " And this is the simulation."}, {"start": 235.4, "end": 241.4, "text": " Dear Fellow Scholars, that is a clean simulation right there."}, {"start": 241.4, "end": 249.4, "text": " And fourth, a thin film within a square is subjected to a gravitational pull that is changing over time."}, {"start": 249.4, "end": 254.4, "text": " And the result is more swirly patterns."}, {"start": 254.4, "end": 258.4, "text": " So, how quickly can we perform all this?"}, {"start": 258.4, "end": 262.4, "text": " This regard the FPS, this is the inverse of the time-step size"}, {"start": 262.4, "end": 265.4, "text": " and is mainly information for fellow researchers,"}, {"start": 265.4, "end": 273.4, "text": " for now, gives upon the time-perfume column, and my goodness, this is blazing fast too."}, {"start": 273.4, "end": 280.4, "text": " It takes less than a second per frame for the catanoid experiment, this is one of the cheaper ones."}, {"start": 280.4, "end": 283.4, "text": " And all this on a laptop."}, {"start": 283.4, "end": 284.4, "text": " Wow!"}, {"start": 284.4, "end": 289.4, "text": " Now, the most expensive experiment in this paper was the really tailored instability."}, {"start": 289.4, "end": 293.4, "text": " This took about 13 seconds per frame."}, {"start": 293.4, "end": 300.4, "text": " This is not bad at all, we can get a proper 
simulation of this quality within an hour or so."}, {"start": 300.4, "end": 306.4, "text": " However, note that the authors also used a big honking machine to compute this scene."}, {"start": 306.4, "end": 314.4, "text": " And remember, this paper is not about optimization, but it is about making the impossible possible."}, {"start": 314.4, "end": 317.4, "text": " And it is doing all that swiftly."}, {"start": 317.4, "end": 319.4, "text": " Huge congratulations to the authors."}, {"start": 319.4, "end": 324.4, "text": " What a time to be alive! PerceptiLabs is a visual API for TensorFlow"}, {"start": 324.4, "end": 328.4, "text": " carefully designed to make machine learning as intuitive as possible."}, {"start": 328.4, "end": 332.4, "text": " This gives you a faster way to build out models with more transparency"}, {"start": 332.4, "end": 337.4, "text": " into how your model is architected, how it performs, and how to debug it."}, {"start": 337.4, "end": 342.4, "text": " Look, it lets you toggle between the visual modeler and the code editor."}, {"start": 342.4, "end": 346.4, "text": " It even generates visualizations for all the model variables"}, {"start": 346.4, "end": 350.4, "text": " and gives you recommendations both during modeling and training,"}, {"start": 350.4, "end": 352.4, "text": " and does all this automatically."}, {"start": 352.4, "end": 358.4, "text": " I only wish I had a tool like this when I was working on my neural networks during my PhD years."}, {"start": 358.4, "end": 365.4, "text": " Visit perceptiLabs.com slash papers to easily install the free local version of their system today."}, {"start": 365.4, "end": 370.4, "text": " Thanks to perceptiLabs for their support and for helping us make better videos for you."}, {"start": 370.4, "end": 376.4, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
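As a quick illustration of the cost arithmetic quoted in the transcript above (under a second per frame for the catenoid scene, roughly 13 seconds per frame for the Rayleigh-Taylor scene, "within an hour or so" for a full-quality result), here is a minimal back-of-the-envelope sketch in Python. The clip length and playback frame rate are assumed example values, not numbers taken from the paper.

def total_simulation_minutes(seconds_per_frame: float,
                             clip_seconds: float = 10.0,
                             playback_fps: int = 24) -> float:
    """Wall-clock minutes needed to simulate a clip, given the cost of one frame.
    clip_seconds and playback_fps are illustrative assumptions."""
    n_frames = clip_seconds * playback_fps
    return n_frames * seconds_per_frame / 60.0

# Rayleigh-Taylor scene: about 13 seconds per frame (figure quoted in the narration).
print(f"Rayleigh-Taylor: {total_simulation_minutes(13.0):.0f} minutes")  # about 52 minutes
# Catenoid scene: under a second per frame.
print(f"Catenoid: {total_simulation_minutes(0.9):.1f} minutes")          # about 3.6 minutes

Under these assumptions, the 13-seconds-per-frame figure lands at roughly 52 minutes for a 10-second clip, which is consistent with the "within an hour or so" estimate in the narration.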
Two Minute Papers
https://www.youtube.com/watch?v=G00A1Fyr5ZQ
New AI Research Work Fixes Your Choppy Videos! 🎬
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Time Lens: Event-based Video Frame Interpolation" is available here: http://rpg.ifi.uzh.ch/TimeLens.html ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Or join us here: https://www.youtube.com/user/keeroyz/join Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to take a bad, choppy video and make beautiful, smooth and creamy footage out of it. With today's camera and graphics technology, we can create videos with 60 frames per second. Those are really smooth. I also make each of these videos using 60 frames per second. However, it almost always happens that I encounter paper videos that have only 24 to 30 frames per second. In this case, I put them in my video editor that has a 60 FPS timeline, where half or even more of these frames will not provide any new information. That is neither smooth nor creamy. And it gets worse. Look, as we try to slow down the videos for some nice slow motion action, this ratio becomes even worse, creating an extremely choppy output video because we have huge gaps between these frames. So, does this mean that there is nothing we can do and we have to put up with this choppy footage? No, not at all. Look at this technique from 2019 that we covered in an earlier video. The results truly speak for themselves. In goes a choppy video, and out comes a smooth and creamy result. So good. But wait, it is not 2019, it is 2021, and we always say that two more papers down the line, and it will be improved significantly. From this example, it seems that we are done here, we don't need any new papers. Is that so? Well, let's see what we have, only one more paper down the line. Now, look, it promises that it can deal with 10-to-1 or even 20-to-1 ratios, which means that for every single image in the video, it creates 10 or 20 new ones, and supposedly we shouldn't notice that. Well, those are big words, so I will believe it when I see it. Let's have a look together. Holy mother of papers, this can really pull this off, and it seems nearly perfect. Wow! It also knocked it out of the park with this one, and all this improvement in just one more paper down the line. The pace of progress in machine learning research is absolutely amazing. But we are experienced Fellow Scholars over here, so we will immediately ask, is this really better than the previous 2019 paper? Let's compare them. Can we have side by side comparisons? Of course we can. You know how much I love fluid simulations? Well, these are not simulations, but a real piece of fluid, and in this one there is no contest. The new one understands the flow so much better, while the previous method sometimes even seems to propagate the waves backwards in time. A big check mark for the new one. In this case, the previous method assumes linear motion when it shouldn't, thereby introducing a ton of artifacts. The new one isn't perfect either, but it performs significantly better. Don't worry for a second, we will talk about linear motions some more in a moment. So, how does all this wizardry happen? One of the key contributions of the paper is that it can find out when to use the easy way and the hard way. What are those? The easy way is using already existing information in the video and computing in-between states for a movement. That is all well and good if we have simple linear motion in our video. But look, the easy way fails here. Why is that? It fails because we have a difficult situation where reflections off of this object rapidly change, and it reflects something. We have to know what that something is. So look, this is not even close to the true image, which means that here we can't just reuse the information in the video; this requires introducing new information.
Yes, that is the hard way, and it excels when new information has to be synthesized. Let's see how well it does. My goodness, look at that, it matches the true reference image almost perfectly. And also, look, the face of the human did not require synthesizing a great deal of new information; it did not change over time, so we can easily refer to the previous frame for it. Hence, the easy way did better here. Did you notice? That is fantastic, because the two are complementary. Both techniques work well, but they excel in different situations. They need each other. So yes, you guessed right, to tie it all together, there is an attention-based averaging step that helps us decide when to use the easy and the hard ways. Now, this is a good paper, so it tells us how these individual techniques contribute to the final image. Using only the easy way can give us about 26 decibels, which would not beat the previous methods in this area. However, look, by adding the hard way, we get a premium-quality result that is already super competitive, and if we add the step that helps us decide when to use the easy and hard ways, we get an extra decibel. I will happily take it, thank you very much. And if we put it all together, oh yes, we get a technique that really outpaces the competition. Excellent. So, in the near future, perhaps we will be able to record a choppy video of a family festivity and have a chance at making this choppy video enjoyable, or maybe even create slow motion videos with a regular camera. No slow motion camera is required. What a time to be alive! This episode has been supported by Lambda GPU Cloud. If you are looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 5.0, "end": 14.0, "text": " Today we are going to take a bad choppy video and make a beautiful, smooth and creamy footage out of it."}, {"start": 14.0, "end": 20.0, "text": " With today's camera and graphics technology, we can create videos with 60 frames per second."}, {"start": 20.0, "end": 26.0, "text": " Those are really smooth. I also make each of these videos using 60 frames per second."}, {"start": 26.0, "end": 35.0, "text": " However, it almost always happens that I encounter paper videos that have only from 24 to 30 frames per second."}, {"start": 35.0, "end": 47.0, "text": " In this case, I put them in my video editor that has a 60 FPS timeline where half or even more of these frames will not provide any new information."}, {"start": 47.0, "end": 51.0, "text": " That is neither smooth nor creamy. And it gets worse."}, {"start": 51.0, "end": 65.0, "text": " Look, as we try to slow down the videos for some nice slow motion action, this ratio becomes even worse creating an extremely choppy output video because we have huge gaps between these frames."}, {"start": 65.0, "end": 71.0, "text": " So, does this mean that there is nothing we can do and have to put up with this choppy footage?"}, {"start": 71.0, "end": 78.0, "text": " No, not at all. Look at this technique from 2019 that we covered in an earlier video."}, {"start": 78.0, "end": 88.0, "text": " The results truly speak for themselves. In goes a choppy video and out comes a smooth and creamy result. So good."}, {"start": 88.0, "end": 98.0, "text": " But wait, it is not 2019, it is 2021 and we always say that two more papers down the line and it will be improved significantly."}, {"start": 98.0, "end": 109.0, "text": " From this example, it seems that we are done here, we don't need any new papers. Is that so? Well, let's see what we have only one more paper down the line."}, {"start": 109.0, "end": 126.0, "text": " Now, look, it promises that it can deal with 10 to 1 or even 221 ratios, which means that for every single image in the video, it creates 10 or 20 new ones and supposedly we shouldn't notice that."}, {"start": 126.0, "end": 134.0, "text": " Well, those are big words, so I will believe it when I see it. Let's have a look together."}, {"start": 134.0, "end": 149.0, "text": " Holy matter of papers, this can really pull this off and it seems nearly perfect. Wow! It also knocked it out of the park with this one and all this improvement in just one more paper down the line."}, {"start": 149.0, "end": 164.0, "text": " The pace of progress in machine learning research is absolutely amazing. But we are experienced fellow scholars over here, so we will immediately ask, is this really better than the previous 2019 paper?"}, {"start": 164.0, "end": 170.0, "text": " Let's compare them. Can we have side by side comparisons? Of course we can."}, {"start": 170.0, "end": 180.0, "text": " You know how much I love fluid simulations? Well, these are not simulations, but a real piece of fluid and in this one there is no contest."}, {"start": 180.0, "end": 192.0, "text": " The new one understands the flow so much better, while the previous method sometimes even seems to propagate the waves backwards in time. 
A big check mark for the new one."}, {"start": 192.0, "end": 201.0, "text": " In this case, the previous method assumes linear motion when it shouldn't, thereby introducing a ton of artifacts."}, {"start": 201.0, "end": 207.0, "text": " The new one isn't perfect either, but it performs significantly better."}, {"start": 207.0, "end": 212.0, "text": " Don't not worry for a second, we will talk about linear motions some more in a moment."}, {"start": 212.0, "end": 215.0, "text": " So, how does all this wizardry happen?"}, {"start": 215.0, "end": 225.0, "text": " One of the key contributions of the paper is that it can find out when to use the easy way and the hard way. What are those?"}, {"start": 225.0, "end": 238.0, "text": " The easy way is using already existing information in the video and computing in between states for a movement. That is all well and good, if we have simple linear motion in our video."}, {"start": 238.0, "end": 252.0, "text": " But look, the easy way fails here. Why is that? It fails because we have a difficult situation where reflections off of this object rapidly change and it reflects something."}, {"start": 252.0, "end": 267.0, "text": " We have to know what that something is. So look, this is not even close to the true image, which means that here we can't just reuse the information in the video, this requires introducing new information."}, {"start": 267.0, "end": 276.0, "text": " Yes, that is the hard way and this excels when new information has to be synthesized. Let's see how well it does."}, {"start": 276.0, "end": 282.0, "text": " My goodness, look at that, it matches the true reference image almost perfectly."}, {"start": 282.0, "end": 295.0, "text": " And also, look, the face of the human did not require synthesizing a great deal of new information, it did not change over time, so we can easily refer to the previous frame for it,"}, {"start": 295.0, "end": 311.0, "text": " hence the easy way did better here. Did you notice? That is fantastic because the two are complementary. Both techniques work well, but they work well elsewhere. They need each other."}, {"start": 311.0, "end": 322.0, "text": " So yes, you guessed right, to tie it all together, there is an attention-based averaging step that helps us decide when to use the easy and the hard ways."}, {"start": 322.0, "end": 330.0, "text": " Now, this is a good paper, so it tells us how these individual techniques contribute to the final image."}, {"start": 330.0, "end": 338.0, "text": " Using only the easy way can give us about 26 decibels, that would not be the previous methods in this area."}, {"start": 338.0, "end": 355.0, "text": " However, look, by adding the hard way, we get a premium quality result that is already super competitive, and if we add the step that helps us decide when to use the easy and hard ways, we get an extra decibel."}, {"start": 355.0, "end": 366.0, "text": " I will happily take it, thank you very much. And if we put it all together, oh yes, we get a technique that really outpaces the competition."}, {"start": 366.0, "end": 385.0, "text": " Excellent. So, in the near future, perhaps we will be able to record a choppy video of a family festivity and have a chance at making this choppy video enjoyable, or maybe even create slow motion videos with a regular camera."}, {"start": 385.0, "end": 399.0, "text": " No slow motion camera is required. What a time to be alive! This episode has been supported by Lambda GPU Cloud. 
If you are looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 399.0, "end": 419.0, "text": " They recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000."}, {"start": 419.0, "end": 434.0, "text": " Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today."}, {"start": 434.0, "end": 440.0, "text": " Our thanks to Lambda for their long-standing support and for helping us make better videos for you."}, {"start": 440.0, "end": 450.0, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
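To make the "easy way" described in the transcript above more concrete, here is a minimal Python sketch of the naive linear assumption (simply cross-fading two neighboring frames) together with the PSNR metric behind the "about 26 decibels" figure. This is only an illustration of those two concepts on assumed toy data; it is not the Time Lens method, which warps pixels using event data and synthesizes genuinely new content for the hard cases.

import numpy as np

def linear_blend(frame_a: np.ndarray, frame_b: np.ndarray, t: float) -> np.ndarray:
    """Naive 'easy way': assume linear change between two frames and cross-fade them
    at time t in [0, 1]. Real interpolation methods warp pixels along motion instead
    of blending in place, which is why this fails on non-linear motion."""
    return (1.0 - t) * frame_a + t * frame_b

def psnr(prediction: np.ndarray, reference: np.ndarray, max_value: float = 1.0) -> float:
    """Peak signal-to-noise ratio in decibels, the quality number quoted in the narration."""
    mse = np.mean((prediction - reference) ** 2)
    return 10.0 * np.log10(max_value ** 2 / mse)

# Toy usage with random 'frames' in [0, 1) (assumed example data, not video content).
a = np.random.rand(64, 64, 3)
b = np.random.rand(64, 64, 3)
middle = linear_blend(a, b, 0.5)
print(f"PSNR of the blend against frame b: {psnr(middle, b):.2f} dB")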
Two Minute Papers
https://www.youtube.com/watch?v=-4M-xoE6iH0
This AI Creates Dessert Photos...And More! 🍰
❤️ Check out Fully Connected by Weights & Biases: https://wandb.me/papers 📝 The paper "NeX: Real-time View Synthesis with Neural Basis Expansion " is available here: https://nex-mpi.github.io/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Or join us here: https://www.youtube.com/user/keeroyz/join Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Thumbnail background image credit: https://pixabay.com/images/id-5712284/ Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to synthesize beautiful and photorealistic shiny objects. This amazing new technique is a NeRF variant, which means that it is a learning-based algorithm that tries to reproduce real-world scenes from only a few views. In go a few photos of a scene, and it has to be able to synthesize new photorealistic images in between these photos. This is view synthesis, in short. As you see here, it can be done quite well with the previous method. So, our question number one is, why bother publishing a new research paper on this? And question number two, there are plenty of view synthesis papers sloshing around, so why choose this paper? Well, first, when we tested the original NeRF method, I noted that thin structures are still quite a challenge. Look, we have some issues here. Okay, that is a good opening for a potential follow-up paper. And in a moment, we'll see how the new method handles them. Can it do any better? In the green close-up, you see that these structures are blurry and lack definition, and, uh-oh, it gets even worse. The red example shows that these structures are completely missing in places. And given that we have zoomed in quite far, and this previous technique is from just one year ago, I wonder how much of an improvement we can expect in just one year. Let's have a look. Wow, that is outstanding. Both problems got solved just like that. But it does more. Way more. This new method also boasts being able to measure reflectance coefficients for every single pixel. That sounds great, but what does that mean? What this should mean is that the reflections and specular highlights in its outputs are supposed to be much better. These are view-dependent effects, so they are quite easy to find. What you need to look for is things that change when we move the camera around. Let's test it. Remember, the input is just a sparse bunch of photos. Here they are, and the AI is supposed to fill in the missing data and produce a smooth video with, hopefully, high-quality specular highlights. These are especially difficult to get right, because they change a great deal when we move the camera just a little. Yet, look, the AI can still deal with them really well. So, yes, this looks good. But we are experienced Fellow Scholars over here, so we'll put this method to the test. Let's try a challenging test for rendering reflections by using this piece of ancient technology called a CD. Not the best for data storage these days, but fantastic for testing rendering algorithms. So, I would like to see two things. One is the environment reflected in the silvery regions. Did we get it? Yes, checkmark. And two, I would like to see the rainbow changing and bending as we move the camera around. Let's see. Oh, look at how beautifully it has done it. I love it. Checkmark. These specular objects are not just a CD thing. As you see here, they are absolutely everywhere around us. Therefore, it is important to get them right for view synthesis. And I must say, these results are very close to getting good enough where we can put on a VR headset and venture into what is essentially a bunch of photos, not even a video, and make it feel like we are really in the room. Absolutely amazing. When I read the paper, I thought, well, all that's great, but we probably need to wait forever to get results like this.
And now, hold on to your papers, because this technique is not only much better in terms of quality, no, no, it is more than a thousand times faster. A thousand times, in one year. I tried to hold on to my papers, but I have to admit that I have failed, and now they are flying about. But that's not all. Not even close. Look, it can also decouple the camera movements from this view-dependent specular effect. So, what does that mean? It is like a thought experiment where we keep the camera stationary and let the shiny things change as if we moved around. The paper also contains source code and a web demo that you can try yourself right now. It reveals that we still have some more to go until the true, real images can be reproduced, but my goodness, this is so much progress in just one paper. Absolutely mind-blowing. Now, this technique also has its limitations beyond not being as good as the real photos, for instance, the reproduction of refracted thin structures. As you see, not so good. But just think about it. The fact that we need to make up these crazy scenes to be able to give it trouble is a true testament to how good this technique is. And all this improvement in just one year. What a time to be alive. This video has been supported by Weights & Biases. Check out their recent offering, Fully Connected, a place where they bring machine learning practitioners together to share and discuss their ideas, learn from industry leaders, and even collaborate on projects together. You see, I get messages from you Fellow Scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. Fully Connected is a great way to learn about the fundamentals, how to reproduce experiments, get your papers accepted to a conference, and more. Make sure to visit them through wandb.me slash papers or just click the link in the video description. Thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 5.0, "end": 11.0, "text": " Today, we are going to synthesize beautiful and photorealistic shiny objects."}, {"start": 11.0, "end": 17.0, "text": " This amazing new technique is a nerve variant, which means that it is a learning-based algorithm"}, {"start": 17.0, "end": 23.0, "text": " that tries to reproduce real-world scenes from only a few views."}, {"start": 23.0, "end": 36.0, "text": " It ingoses a few photos of a scene, and it has to be able to synthesize new photorealistic images in between these photos."}, {"start": 36.0, "end": 39.0, "text": " This is view synthesis in short."}, {"start": 39.0, "end": 43.0, "text": " As you see here, it can be done quite well with the previous method."}, {"start": 43.0, "end": 50.0, "text": " So, our question number one is, why bother publishing a new research paper on this?"}, {"start": 50.0, "end": 58.0, "text": " And question number two, there are plenty of use synthesis papers sloshing around, so why choose this paper?"}, {"start": 58.0, "end": 66.0, "text": " Well, first, when we tested the original nerve method, I noted that thin structures are still quite a challenge."}, {"start": 66.0, "end": 70.0, "text": " Look, we have some issues here."}, {"start": 70.0, "end": 75.0, "text": " Okay, that is a good opening for a potential follow-up paper."}, {"start": 75.0, "end": 79.0, "text": " And in a moment, we'll see how the new method handles them."}, {"start": 79.0, "end": 81.0, "text": " Can it do any better?"}, {"start": 81.0, "end": 90.0, "text": " In the green close-up, you see that these structures are blurry and like definition, and, oh-oh, it gets even worse."}, {"start": 90.0, "end": 95.0, "text": " The red example shows that these structures are completely missing at places."}, {"start": 95.0, "end": 102.0, "text": " And given that we have zoomed in quite far, and this previous technique is from just one year ago,"}, {"start": 102.0, "end": 108.0, "text": " I wonder how much of an improvement can we expect in just one year?"}, {"start": 108.0, "end": 110.0, "text": " Let's have a look."}, {"start": 110.0, "end": 117.0, "text": " Wow, that is outstanding. Both problems got solved just like that."}, {"start": 117.0, "end": 121.0, "text": " But it does more. 
Way more."}, {"start": 121.0, "end": 128.0, "text": " This new method also boasts being able to measure reflectance coefficients for every single pixel."}, {"start": 128.0, "end": 132.0, "text": " That sounds great, but what does that mean?"}, {"start": 132.0, "end": 140.0, "text": " What this should mean is that the reflections and specular highlights in its outputs are supposed to be much better."}, {"start": 140.0, "end": 144.0, "text": " These are view-dependent effects, so they are quite easy to find."}, {"start": 144.0, "end": 150.0, "text": " What you need to look for is things that change when we move the camera around."}, {"start": 150.0, "end": 151.0, "text": " Let's test it."}, {"start": 151.0, "end": 155.0, "text": " Remember, the input is just a sparse bunch of photos."}, {"start": 155.0, "end": 166.0, "text": " Here they are, and the AI is supposed to fill in the missing data and produce a smooth video with, hopefully, high quality specular highlights."}, {"start": 166.0, "end": 174.0, "text": " These are especially difficult to get right, because they change a great deal when we move the camera just a little."}, {"start": 174.0, "end": 179.0, "text": " Yet, look, the AI can still deal with them really well."}, {"start": 179.0, "end": 188.0, "text": " So, yes, this looks good. But we are experienced fellow scholars over here, so we'll put this method to the test."}, {"start": 188.0, "end": 196.0, "text": " Let's try a challenging test for rendering reflections by using this piece of ancient technology called a CD."}, {"start": 196.0, "end": 203.0, "text": " Not the best for data storage these days, but fantastic for testing rendering algorithms."}, {"start": 203.0, "end": 210.0, "text": " So, I would like to see two things. One is the environment reflected in the silvery regions."}, {"start": 210.0, "end": 214.0, "text": " Did we get it? Yes, checkmark."}, {"start": 214.0, "end": 221.0, "text": " And two, I would like to see the rainbow changing and bending as we move the camera around."}, {"start": 221.0, "end": 227.0, "text": " Let's see. Oh, look at how beautifully it has done it. I love it."}, {"start": 227.0, "end": 236.0, "text": " Checkmark. These specular objects are not just a CD thing. As you see here, they are absolutely everywhere around us."}, {"start": 236.0, "end": 239.0, "text": " Therefore, it is important to get it right for view synthesis."}, {"start": 239.0, "end": 255.0, "text": " And I must say, these results are very close to getting good enough where we can put on a VR headset and venture into what is essentially a bunch of photos, not even a video, and make it feel like we are really in the room."}, {"start": 255.0, "end": 265.0, "text": " Absolutely amazing. When I read the paper, I thought, well, all that's great, but we probably need to wait forever to get results like this."}, {"start": 265.0, "end": 277.0, "text": " And now, hold on to your papers because this technique is not only much better in terms of quality, no, no, it is more than a thousand times faster."}, {"start": 277.0, "end": 288.0, "text": " A thousand times. In one year, I tried to hold on to my papers, but I have to admit that I have failed and now they are flying about."}, {"start": 288.0, "end": 298.0, "text": " But that's not all. Not even close. Look, it can also decouple the camera movements from this view-dependent, specular effect."}, {"start": 298.0, "end": 309.0, "text": " So, what does that mean? 
It is like a thought experiment where we keep the camera stationary and let the shiny things change as if we moved around."}, {"start": 309.0, "end": 315.0, "text": " The paper also contains source code and a web demo that you can try yourself right now."}, {"start": 315.0, "end": 327.0, "text": " It reveals that we still have some more to go until the true real images can be reproduced, but my goodness, this is so much progress in just one paper."}, {"start": 327.0, "end": 340.0, "text": " Absolutely mind-blowing. Now, this technique also has its limitations beyond not being as good as the real photos, for instance, the reproduction of refracted thin structures."}, {"start": 340.0, "end": 354.0, "text": " As you see, not so good. But, just think about it. The fact that we need to make up these crazy scenes to be able to give it trouble is a true testament to how good this technique is."}, {"start": 354.0, "end": 357.0, "text": " And all this improvement in just one year."}, {"start": 357.0, "end": 362.0, "text": " What a time to be alive. This video has been supported by weights and biases."}, {"start": 362.0, "end": 377.0, "text": " Check out the recent offering fully connected, a place where they bring machine learning practitioners together to share and discuss their ideas, learn from industry leaders, and even collaborate on projects together."}, {"start": 377.0, "end": 386.0, "text": " You see, I get messages from you fellow scholars telling me that you have been inspired by the series, but don't really know where to start."}, {"start": 386.0, "end": 388.0, "text": " And here it is."}, {"start": 388.0, "end": 397.0, "text": " Fully connected is a great way to learn about the fundamentals, how to reproduce experiments, get your papers accepted to a conference, and more."}, {"start": 397.0, "end": 411.0, "text": " Make sure to visit them through wnb.me slash papers or just click the link in the video description. Thanks to weights and biases for their long standing support and for helping us make better videos for you."}, {"start": 411.0, "end": 438.0, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
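The transcript above mentions that the method stores reflectance coefficients for every pixel so that specular highlights can change with the viewing direction. The sketch below illustrates the general idea of view-dependent color as a per-pixel base color plus a weighted sum of basis functions of the view direction; the basis functions and coefficient values here are made-up stand-ins for illustration, not the learned basis or data from the NeX paper.

import numpy as np

def view_dependent_color(coefficients: np.ndarray, view_dir: np.ndarray) -> np.ndarray:
    """Evaluate a per-pixel color as a base term plus a weighted sum of simple basis
    functions of the viewing direction. This mirrors the idea of per-pixel reflectance
    coefficients producing view-dependent highlights; the basis below is a toy choice."""
    v = view_dir / np.linalg.norm(view_dir)
    basis = np.array([1.0, v[0], v[1], v[2]])   # constant term plus direction components
    return coefficients @ basis                 # (3, 4) @ (4,) -> RGB color

# Assumed example coefficients for one pixel: base RGB plus three view-dependent terms.
k = np.array([[0.6, 0.2, 0.0, 0.1],
              [0.5, 0.0, 0.2, 0.1],
              [0.4, 0.1, 0.1, 0.2]])
print(view_dependent_color(k, np.array([0.0, 0.0, 1.0])))  # looking head-on
print(view_dependent_color(k, np.array([0.7, 0.0, 0.7])))  # grazing angle: the color shifts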
Two Minute Papers
https://www.youtube.com/watch?v=8qeCjeJTnvI
Is This Simulation Wrong? 👕
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Fast Linking Numbers for Topology Verification of Loopy Structures " is available here: https://graphics.stanford.edu/papers/fastlinkingnumbers/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Or join us here: https://www.youtube.com/user/keeroyz/join Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #gamedev
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. We had a crazy simulation experiment in a previous episode that we called Ghosts and Chains. What is behind this interesting name? Here, you see Houdini's Vellum, the industry-standard simulator for cloth and a number of other kinds of simulations. It is an absolutely amazing tool, but wait a second. Look, things are a little too busy here, and it is because artificial ghost forces appeared in the simulation, even on a simple test case with 35 chain links. And we discussed a new method and wondered whether it could deal with them. The answer was a resounding yes, no ghost forces. Not only that, but it could deal with even longer chains. Let's try 100 links. That works too. Now, we always say that two more papers down the line and the technique will be improved significantly. So here is the Two Minute Papers moment of truth. This is just one more paper down the line. And what are we simulating today? The physics of chains? No! Give me a break. This new technique helps us compute the physics of chain mail. Now we're talking. We quickly set the simulation frequency to 480 Hz, which means that we subdivide one second into 480 time instants and compute the interactions of all of these chain links. It looks something like this. This computes relatively quickly and looks great too. Until we use this new technique to verify this method and see this huge red blob. What does this mean? It means that over 4000 links were destroyed during this simulation. So this new method helps us verify an already existing simulation. You experienced Fellow Scholars know that this is going to be a ton of fun. Spotting flaws in already existing simulations? Sign me up right now. Now, let's go and more than double the frequency of the simulation to 1200 Hz, which means we get more detailed computations. Does it make a difference? Let's see if it's any better. Well, it looks different, and it looks great, but we are scholars here, and just looking is not enough to tell if everything is in order. So what does the new technique say? Uh oh, still not there. Look, 16 links are still gone. Okay, now let's go all out and double the simulation frequency again to 2400 Hz. And now the number of violations is, let's see, finally, zero. The new technique is able to tell us that this is the correct version of the simulation. Excellent. Let's see our next opponent. Well, this simulation looks fine until we take a closer look, and yes, there are quite a few failed links here. And here is the fixed simulation. So, does the verification step take as long as the simulation? If so, does this make sense at all? Well, get this, this verification step takes only a fraction of a second for one frame. Let's do two more and then see if we can deal with the final boss simulation. Now we start twisting, and some links are already breaking, but otherwise not too bad. And now, oh my, look at that. The simulation broke down completely. In this case, we already know that the verification also takes very little time. So next time, we don't have to pray that the simulation does not break, we only have to simulate up until this point. And when it unveils the first breakages, we can decide to just abort and retry with a more detailed simulation. This saves us a ton of time and computation. So good. I love it. It also helps with these knitted cloth simulations, where otherwise it is very difficult for the naked eye to tell if something went wrong. But not for this method.
And now, let's try the final boss. A huge simulation with a thousand rubber bands. Let's see. Yes, the technique revealed that this has tons of failed links right from the start. And let's verify the more detailed version of the same simulation. Finally, this one worked perfectly. And not only that, but it also introduced my favorite guy, who wants no part of this and is just sliding away. So from now on, with this tool, we can not only strike a better balance of computation time and quality for linkage-based simulations, but if something is about to break, we don't just have to pray for the simulation to finish successfully; we will know in advance if there are any issues. What a time to be alive. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 5.04, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karojona Ifehir."}, {"start": 5.04, "end": 12.16, "text": " We had a crazy simulation experiment in a previous episode that we called Ghosts and Chains."}, {"start": 12.16, "end": 14.4, "text": " What is behind this interesting name?"}, {"start": 14.4, "end": 20.36, "text": " Here, you see Houdini's volume, the industry standard simulator for cloth and the number"}, {"start": 20.36, "end": 22.8, "text": " of other kinds of simulations."}, {"start": 22.8, "end": 27.64, "text": " It is an absolutely amazing tool, but wait a second."}, {"start": 27.64, "end": 34.84, "text": " Look, things are a little too busy here, and it is because artificial Ghost forces appeared"}, {"start": 34.84, "end": 41.04, "text": " in a simulation, even on a simple test case, with 35 chain links."}, {"start": 41.04, "end": 46.760000000000005, "text": " And we discussed a new method and wondered whether it could deal with them."}, {"start": 46.760000000000005, "end": 52.24, "text": " The answer was a resounding yes, no Ghost forces."}, {"start": 52.24, "end": 60.04, "text": " Not only that, but it could deal with even longer chains, let's try 100 links."}, {"start": 60.04, "end": 61.040000000000006, "text": " That works too."}, {"start": 61.040000000000006, "end": 66.6, "text": " Now, we always say that two more papers down the line and the technique will be improved"}, {"start": 66.6, "end": 68.2, "text": " significantly."}, {"start": 68.2, "end": 71.84, "text": " So here is the two-minute papers moment of truth."}, {"start": 71.84, "end": 74.88, "text": " This is just one more paper down the line."}, {"start": 74.88, "end": 77.4, "text": " And what are we simulating today?"}, {"start": 77.4, "end": 79.32000000000001, "text": " The physics of chains?"}, {"start": 79.32000000000001, "end": 80.64, "text": " No!"}, {"start": 80.64, "end": 81.80000000000001, "text": " Give me a break."}, {"start": 81.8, "end": 86.72, "text": " This new technique helps us computing the physics of chain meals."}, {"start": 86.72, "end": 88.2, "text": " Now we're talking."}, {"start": 88.2, "end": 95.6, "text": " We quickly set the simulation frequency to 480 Hz, which means that we subdivide one second"}, {"start": 95.6, "end": 102.72, "text": " into 480 time instants and compute the interactions of all of these chain links."}, {"start": 102.72, "end": 105.4, "text": " It looks something like this."}, {"start": 105.4, "end": 109.88, "text": " This compute relatively quickly and looks great too."}, {"start": 109.88, "end": 117.8, "text": " Until we use this new technique to verify this method and see this huge red blob, what"}, {"start": 117.8, "end": 119.44, "text": " does this mean?"}, {"start": 119.44, "end": 124.47999999999999, "text": " It means that over 4000 were destroyed during this simulation."}, {"start": 124.47999999999999, "end": 130.0, "text": " So this new method helps us verify an already existing simulation."}, {"start": 130.0, "end": 135.6, "text": " You experienced fellow scholars know that this is going to be a ton of fun."}, {"start": 135.6, "end": 140.51999999999998, "text": " Starting flaws in already existing simulations, sign me up right now."}, {"start": 140.51999999999998, "end": 147.28, "text": " Now, let's go and more than double the frequency of the simulation to 1200 Hz, that means we"}, {"start": 147.28, "end": 150.2, "text": " get more detailed computations."}, {"start": 150.2, "end": 152.32, "text": " Does it 
make a difference?"}, {"start": 152.32, "end": 154.68, "text": " Let's see if it's any better."}, {"start": 154.68, "end": 161.64, "text": " Well, it looks different, and it looks great, but we are scholars here just looking is"}, {"start": 161.64, "end": 165.28, "text": " not enough to tell if everything is in order."}, {"start": 165.28, "end": 168.2, "text": " So what does the new technique say?"}, {"start": 168.2, "end": 174.84, "text": " Uh oh, still not there, look, 16 links are still gone."}, {"start": 174.84, "end": 183.32, "text": " Okay, now let's go all out and double the simulation frequency again to 2400 Hz."}, {"start": 183.32, "end": 190.0, "text": " And now the number of violations is, let's see, finally, zero."}, {"start": 190.0, "end": 194.76, "text": " The new technique is able to tell us that this is the correct version of the simulation."}, {"start": 194.76, "end": 195.76, "text": " Excellent."}, {"start": 195.76, "end": 198.44, "text": " Let's see our next opponent."}, {"start": 198.44, "end": 207.72, "text": " Well, this simulation looks fine until we take a closer look and yes, there are quite"}, {"start": 207.72, "end": 211.44, "text": " a few fade links here."}, {"start": 211.44, "end": 213.23999999999998, "text": " And the fixed simulation."}, {"start": 213.23999999999998, "end": 218.51999999999998, "text": " So, does the verification step take as long as the simulation?"}, {"start": 218.51999999999998, "end": 221.2, "text": " If so, does this make sense at all?"}, {"start": 221.2, "end": 229.51999999999998, "text": " Well, get this, this verification step takes only a fraction of a second for one frame."}, {"start": 229.51999999999998, "end": 236.16, "text": " Let's do two more and then see if we can deal with the final boss simulation."}, {"start": 236.16, "end": 243.64, "text": " Now we start twisting and some links are already breaking, but otherwise not too bad."}, {"start": 243.64, "end": 248.92, "text": " And now, oh my, look at that."}, {"start": 248.92, "end": 251.79999999999998, "text": " The simulation broke down completely."}, {"start": 251.79999999999998, "end": 257.08, "text": " In this case, we already know that the verification also takes very little time."}, {"start": 257.08, "end": 262.32, "text": " So next time, we don't have to pray that the simulation does not break, we only have"}, {"start": 262.32, "end": 265.08, "text": " to simulate up until this point."}, {"start": 265.08, "end": 271.2, "text": " And when it unveils the first breakages, we can decide to just abort and retry with"}, {"start": 271.2, "end": 273.36, "text": " more detailed simulation."}, {"start": 273.36, "end": 277.48, "text": " This saves us a ton of time and computation."}, {"start": 277.48, "end": 278.48, "text": " So good."}, {"start": 278.48, "end": 279.96000000000004, "text": " I love it."}, {"start": 279.96000000000004, "end": 285.40000000000003, "text": " It also helps with disnitted cloth simulations, where otherwise it is very difficult to tell"}, {"start": 285.40000000000003, "end": 288.44, "text": " for the naked eye if something went wrong."}, {"start": 288.44, "end": 291.92, "text": " But, not for this method."}, {"start": 291.92, "end": 295.0, "text": " And now, let's try the final boss."}, {"start": 295.0, "end": 299.0, "text": " A huge simulation with a thousand rubber bands."}, {"start": 299.0, "end": 300.0, "text": " Let's see."}, {"start": 300.0, "end": 307.12, "text": " Yes, the technique revealed that this has tons of failed links right from the 
start."}, {"start": 307.12, "end": 311.12, "text": " And let's verify the more detailed version of the same simulation."}, {"start": 311.12, "end": 314.84000000000003, "text": " Finally, this one worked perfectly."}, {"start": 314.84000000000003, "end": 320.56, "text": " And not only that, but it also introduced my favorite guy who wants no part of this"}, {"start": 320.56, "end": 322.96, "text": " and is just sliding away."}, {"start": 322.96, "end": 328.16, "text": " So from now on, with this tool, we can not only strike a better balance of computation"}, {"start": 328.16, "end": 334.6, "text": " time and quality for linkage-based simulations, but if something is about to break, we don't"}, {"start": 334.6, "end": 340.36, "text": " just have to pray for the simulation to finish successfully, we will know in advance if"}, {"start": 340.36, "end": 342.6, "text": " there are any issues."}, {"start": 342.6, "end": 344.40000000000003, "text": " What a time to be alive."}, {"start": 344.40000000000003, "end": 347.84000000000003, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 347.84000000000003, "end": 353.8, "text": " If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 353.8, "end": 360.8, "text": " They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances."}, {"start": 360.8, "end": 367.36, "text": " And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and"}, {"start": 367.36, "end": 368.36, "text": " Asia."}, {"start": 368.36, "end": 373.72, "text": " Plus they are the only Cloud service with 48GB RTX 8000."}, {"start": 373.72, "end": 380.12, "text": " Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances,"}, {"start": 380.12, "end": 381.92, "text": " workstations or servers."}, {"start": 381.92, "end": 387.88, "text": " Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances"}, {"start": 387.88, "end": 388.88, "text": " today."}, {"start": 388.88, "end": 393.6, "text": " Thanks to Lambda for their long-standing support and for helping us make better videos for"}, {"start": 393.6, "end": 394.6, "text": " you."}, {"start": 394.6, "end": 422.04, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=_8ExhGic_Co
DeepMind’s Robot Inserts A USB Stick! 🤖
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/stacey/yolo-drive/reports/Bounding-Boxes-for-Object-Detection--Vmlldzo4Nzg4MQ 📝 The paper "Scaling data-driven robotics with reward sketching and batch reinforcement learning" is available here: https://sites.google.com/view/data-driven-robotics/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Or join us here: https://www.youtube.com/user/keeroyz/join Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Two years ago, in 2019, scientists at OpenAI published an outstanding paper that showcased a robot arm that could dexterously manipulate a Rubik's cube. And how good were the results? Well, nothing short of spectacular. It could not only manipulate and solve the cube, but they could even hamstring the hand in many different ways and it would still be able to do well. And I am telling you, scientists at OpenAI got very creative in tormenting this little hand. They added a rubber glove, tied multiple fingers together, threw a blanket on it, and pushed it around with a plush giraffe and a pen. It still worked, but probably had nightmares about that day in the lab. Robot nightmares, if you will. And now, let's see what is going on over at DeepMind's side and have a look at this work on manipulating objects. In this example, a human starts operating this contraption. And after that step is done, we leave it alone and our first question is, how do we tell this robot arm what it should do? It doesn't yet know what the task is, but we can tell it by this reward sketching step. Essentially, this works like a video game. Not in the classical setting, but the other way around. Here we are not playing the video game, but we are the video game. And the robot is the character that plays the video game, in which we can provide it feedback and tell it when it is doing well or not. But then, from these rewards, it can learn what the task is. So, all that sounds great, but what do we get for all this work? Four things. One, we can instruct it to learn to lift deformables. This includes a piece of cloth, a rope, a soft ball, and more. Two, much like OpenAI's robot arm, it is robust against perturbations. In other words, it can recover from our evil machinations. Look, we can get creative here too. For instance, we can reorganize the objects on the table, nudge an already grabbed object out of its gripper, or simply just undo the stacks that are already done. This diligent little AI is not fazed, it just keeps on trying and trying, and eventually it succeeds. A great life lesson right there. And three, it generalizes to new object types well, and does not get confused by different object colors and geometries. And now, hold on to your papers for number four, because here comes one of the most frustrating tasks for any intelligent being, something that not many humans can perform: inserting a USB key correctly on the first try. Can it do that? Well, does this count as a first try? I don't know for sure, but dear fellow scholars, the singularity is officially getting very, very close, especially given that we can even move the machine around a little, and it will still find the correct port. If only this machine could pronounce my name correctly, then we would conclude that we have reached the singularity. But wait, we noted that first a human starts controlling the arm. How does the fully trained AI compare to this human's work? And here comes the best part. It learned to perform these tasks faster. Look, by the six-second mark, the human started grabbing the green block, but by this time, the AI is already mid-air with it, and by nine seconds, it is done while the human is still at work. Excellent. And we get an 80% success rate on this task with only 8 hours of training. That is within a single working day. 
One of the key ideas here, and the part that I liked best, is that we can essentially reprogram the AI with the reward sketching step. And I wonder what else this could be used for if we did that. Do you have some ideas? Let me know in the comments below. This episode has been supported by Weights & Biases. In this post, they show you how to use their tool to draw bounding boxes for object detection and, even more importantly, how to debug them. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights & Biases is free for all individuals, academics, and open source projects. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you.
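To make the reward sketching idea a bit more tangible, here is a minimal toy sketch, and it is only that: a rough interpretation, not DeepMind's implementation. A human marks a few time points of a recorded robot episode with how well the task is going, a simple model turns those sparse marks into a dense per-frame reward, and that densely labeled data is what a batch reinforcement learning agent would then train on. All names, numbers, and the polynomial fit are illustrative assumptions.

```python
# Toy sketch of "reward sketching": sparse human reward annotations on a
# recorded robot episode are turned into a dense per-frame reward signal.
# Illustrative only; the real system trains a neural reward model instead.
import numpy as np

def fit_reward_model(sketch_times, sketch_rewards, degree=3):
    """Fit a low-degree polynomial to the sparse human reward sketches."""
    coeffs = np.polyfit(sketch_times, sketch_rewards, deg=degree)
    return np.poly1d(coeffs)

# Hypothetical annotations: (time in seconds, sketched reward in [0, 1]).
times = np.array([0.0, 2.0, 4.0, 6.0, 9.0])
rewards = np.array([0.0, 0.1, 0.4, 0.7, 1.0])   # task gets closer to success

reward_model = fit_reward_model(times, rewards)

# Densely label every frame of the episode; a batch RL learner would consume this.
frame_times = np.linspace(0.0, 9.0, 90)          # e.g. 10 fps over 9 seconds
dense_rewards = np.clip(reward_model(frame_times), 0.0, 1.0)
print(dense_rewards[:5])
```

In the actual work, the reward model is learned from many such annotated episodes; the polynomial here merely stands in for that learned mapping.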
[{"start": 0.0, "end": 4.72, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karojona Ifehir."}, {"start": 4.72, "end": 13.44, "text": " Two years ago, in 2019, scientists at OpenAI published an outstanding paper that showcased a robot arm"}, {"start": 13.44, "end": 16.4, "text": " that could dexterously manipulate a Rubik's cube."}, {"start": 17.04, "end": 19.12, "text": " And how good were the results?"}, {"start": 19.76, "end": 22.240000000000002, "text": " Well, nothing short of spectacular."}, {"start": 22.8, "end": 26.16, "text": " It could not only manipulate and solve the cube,"}, {"start": 26.16, "end": 32.24, "text": " but they could even hamstring the hand in many different ways and it would still be able to do well."}, {"start": 33.12, "end": 40.16, "text": " And I am telling you, scientists at OpenAI got very creative in tormenting this little hand."}, {"start": 40.16, "end": 44.8, "text": " They added a rubber glove, tied multiple fingers together,"}, {"start": 45.760000000000005, "end": 50.96, "text": " threw a blanket on it, and pushed it around with a plush giraffe and a pen."}, {"start": 50.96, "end": 56.56, "text": " It still worked, but probably had nightmares about that day in the lab."}, {"start": 56.56, "end": 58.24, "text": " Robot nightmares, if you will."}, {"start": 58.24, "end": 66.16, "text": " And now, let's see what is going on over deep-mind side and have a look at this work on manipulating objects."}, {"start": 66.16, "end": 70.72, "text": " In this example, a human starts operating this contraption."}, {"start": 70.72, "end": 76.24000000000001, "text": " And after that step is done, we leave it alone and our first question is,"}, {"start": 76.24000000000001, "end": 79.84, "text": " how do we tell this robot arm what it should do?"}, {"start": 79.84, "end": 86.08, "text": " It doesn't yet know what the task is, but we can tell it by this reward sketching step."}, {"start": 86.88000000000001, "end": 89.44, "text": " Essentially, this works like a video game."}, {"start": 90.08, "end": 93.28, "text": " Not in the classical setting, but the other way around."}, {"start": 93.84, "end": 98.4, "text": " Here we are not playing the video game, but we are the video game."}, {"start": 98.88, "end": 104.16, "text": " And the robot is the character that plays the video game in which we can provide it feedback"}, {"start": 104.16, "end": 106.96000000000001, "text": " and tell it when it is doing well or not."}, {"start": 106.96, "end": 111.36, "text": " But then, from these rewards, it can learn what the task is."}, {"start": 112.0, "end": 116.08, "text": " So, all that sounds great, but what do we get for all this work?"}, {"start": 116.96, "end": 117.6, "text": " Four things."}, {"start": 118.32, "end": 122.16, "text": " One, we can instruct it to learn to lift deformables."}, {"start": 122.96, "end": 127.75999999999999, "text": " This includes a piece of cloth, a rope, a soft ball, and more."}, {"start": 128.48, "end": 133.28, "text": " Two, much like open AI's robot arm, it is robust against perturbations."}, {"start": 133.28, "end": 137.52, "text": " In other words, it can recover from our evil machinations."}, {"start": 137.52, "end": 140.24, "text": " Look, we can get creative here too."}, {"start": 140.24, "end": 143.76, "text": " For instance, we can reorganize the objects on the table,"}, {"start": 143.76, "end": 147.44, "text": " nudge an already grabbed object out of its gripper,"}, {"start": 147.44, "end": 151.52, "text": " or simply 
just undo the stacks that are already done."}, {"start": 151.52, "end": 157.12, "text": " This diligent little AI is not phased, it just keeps on trying and trying,"}, {"start": 157.12, "end": 159.52, "text": " and eventually it succeeds."}, {"start": 159.52, "end": 162.0, "text": " A great life lesson right there."}, {"start": 162.0, "end": 166.96, "text": " And three, it generalizes to new object types well,"}, {"start": 166.96, "end": 171.44, "text": " and does not get confused by different object colors and geometries."}, {"start": 173.44, "end": 176.72, "text": " And now, hold on to your papers for number four,"}, {"start": 176.72, "end": 181.76, "text": " because here comes one of the most frustrating tasks for any intelligent being,"}, {"start": 182.56, "end": 185.2, "text": " something that not many humans can perform,"}, {"start": 185.2, "end": 189.36, "text": " inserting a USB key correctly on the first try."}, {"start": 189.36, "end": 194.8, "text": " Can it do that? Well, does this count as a first try?"}, {"start": 195.52, "end": 199.04000000000002, "text": " I don't know for sure, but dear fellow scholars,"}, {"start": 199.04000000000002, "end": 202.64000000000001, "text": " the singularity is officially getting very, very close,"}, {"start": 203.36, "end": 207.12, "text": " especially given that we can even move the machine around a little,"}, {"start": 207.68, "end": 210.0, "text": " and it will still find the correct port."}, {"start": 210.64000000000001, "end": 214.64000000000001, "text": " If only the first machine could pronounce my name correctly,"}, {"start": 214.64000000000001, "end": 217.68, "text": " then we would conclude that we have reached the singularity."}, {"start": 217.68, "end": 222.88, "text": " But wait, we noted that first a human starts controlling the arm."}, {"start": 222.88, "end": 227.04000000000002, "text": " How does the fully trained AI compare to this human's work?"}, {"start": 227.04000000000002, "end": 229.6, "text": " And here comes the best part."}, {"start": 229.6, "end": 232.96, "text": " It learned to perform these tasks faster."}, {"start": 232.96, "end": 238.08, "text": " Look, by the six second mark, the human started grabbing the green block,"}, {"start": 238.08, "end": 242.4, "text": " but by this time, the AI is already mid-air with it,"}, {"start": 242.4, "end": 247.68, "text": " and by nine seconds, it is done while the human is still at work."}, {"start": 249.20000000000002, "end": 256.4, "text": " Excellent. And we get an 80% success rate on this task with only 8 hours of training."}, {"start": 257.12, "end": 259.36, "text": " That is within a single working day."}, {"start": 260.0, "end": 265.2, "text": " One of the key ideas here and the part that I liked best is that we can essentially"}, {"start": 265.2, "end": 268.32, "text": " reprogram the AI with the reward sketching step."}, {"start": 268.32, "end": 272.88, "text": " And I wonder what else this could be used for if we did that."}, {"start": 272.88, "end": 276.24, "text": " Do you have some ideas? 
Let me know in the comments below."}, {"start": 276.24, "end": 279.59999999999997, "text": " This episode has been supported by weights and biases."}, {"start": 279.59999999999997, "end": 285.28, "text": " In this post, they show you how to use their tool to draw bounding boxes for object detection"}, {"start": 285.28, "end": 288.0, "text": " and even more importantly, how to debug them."}, {"start": 288.0, "end": 292.8, "text": " Weight and biases provides tools to track your experiments in your deep learning projects."}, {"start": 292.8, "end": 296.4, "text": " Their system is designed to save you a ton of time and money,"}, {"start": 296.4, "end": 301.2, "text": " and it is actively used in projects at prestigious labs such as OpenAI,"}, {"start": 301.2, "end": 303.84, "text": " Toyota Research, GitHub, and more."}, {"start": 303.84, "end": 308.23999999999995, "text": " And the best part is that weights and biases is free for all individuals,"}, {"start": 308.23999999999995, "end": 310.56, "text": " academics, and open source projects."}, {"start": 310.56, "end": 313.2, "text": " It really is as good as it gets."}, {"start": 313.2, "end": 317.28, "text": " Make sure to visit them through wnb.com slash papers,"}, {"start": 317.28, "end": 319.91999999999996, "text": " or just click the link in the video description,"}, {"start": 319.91999999999996, "end": 322.08, "text": " and you can get a free demo today."}, {"start": 322.08, "end": 326.96, "text": " Our thanks to weights and biases for their long-standing support and for helping us make better"}, {"start": 326.96, "end": 351.91999999999996, "text": " videos for you."}]
Two Minute Papers
https://www.youtube.com/watch?v=0zaGYLPj4Kk
NVIDIA’s Face Generator AI: This Is The Next Level! 👩‍🔬
❤️ Check out Fully Connected by Weights & Biases: https://wandb.me/papers 📝 The paper "Alias-Free GAN" is available here: https://nvlabs.github.io/alias-free-gan/ 📝 Our material synthesis paper is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Or join us here: https://www.youtube.com/user/keeroyz/join Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #nvidia #stylegan3
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we will see how a small change to an already existing learning-based technique can result in a huge difference in its results. This is StyleGAN2, a technique that appeared in December 2019. It is a neural network-based learning algorithm that is capable of synthesizing these eye-poppingly detailed images of human beings that don't even exist. This is all synthetic. It also supports a cool feature where we can give it a photo, then it embeds this image into a latent space, and in this space we can easily apply modifications to it. Okay, but what is this latent space thing? A latent space is a made-up place where we are trying to organize data in a way that similar things are close to each other. In our earlier work, we were looking to generate hundreds of variants of a material model to populate this scene. In this latent space, we can concoct all of these really cool digital material models. A link to this work is available in the video description. StyleGAN2 uses walks in a similar latent space to create these human faces and animate them. So, let's see that. When we take a walk in the internal latent space of this technique, we can generate animations. Let's see how StyleGAN2 does this. It is a true miracle that a computer can generate images like this. However, wait a second. Look closely. Did you notice it? Something is not right here. Don't despair if not; it is hard to pin down what the exact problem is, but it is easy to see that there is some sort of flickering going on. So, what is the issue? Well, the issue is that there are landmarks, for instance, the beard, which don't really or just barely move, and essentially the face is being generated under it with these constraints. The authors refer to this problem as texture sticking. The AI suffers from a sticky beard, if you will. Imagine saying that 20 years ago to someone, you would end up in a madhouse. Now, this new paper from scientists at Nvidia promises a tiny, but important architectural change. And we will see if this issue, which seems like quite a limitation, can be solved with it or not. And now, hold on to your papers and let's see the new method. Holy Mother of Papers! Do you see what I see here? The sticky beards are a thing of the past and facial landmarks are allowed to fly about freely. And not only that, but the results are much smoother and more consistent, to the point that it can not only generate photorealistic images of virtual humans. Come on, that is so 2020. This generates photorealistic videos of virtual humans. So, I wonder, did the new technique also inherit the generality of StyleGAN2? Let's see, we know that it works on real humans, and now paintings and art pieces. Yes, excellent! And of course, cats and other animals as well. The small change that creates these beautiful results is what we call an equivariant filter design; essentially, this ensures that fine details move together in the inner thinking of the neural network. This is an excellent lesson on how a small and carefully designed architectural change can have a huge effect on the results. If we look under the hood, we see that the inner representation of the new method is completely different from its predecessor. You see, the features are indeed allowed to fly about, and the new method even seems to have invented a coordinate system of sorts to be able to move these things around. What an incredible idea! 
These learning algorithms are getting better and better with every published paper. Now, good news. It is only marginally more expensive to train and run than StyleGAN2, and the less good news is that training these huge neural networks still requires a great deal of computation. The silver lining is that if it has been trained once, it can be run inexpensively for as long as we wish. So, images of virtual humans might soon become a thing of the past because, from now on, we can generate photorealistic videos of them. Absolutely amazing! What a time to be alive! This video has been supported by Weights & Biases. Check out their recent offering, Fully Connected, a place where they bring machine learning practitioners together to share and discuss their ideas, learn from industry leaders, and even collaborate on projects together. You see, I get messages from you fellow scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is! Fully Connected is a great way to learn about the fundamentals, how to reproduce experiments, get your papers accepted to a conference, and more. Make sure to visit them through wandb.me slash papers or just click the link in the video description. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time!
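Both this transcript and the earlier ones lean on the idea of taking a walk in a latent space to generate animation frames. Here is a minimal sketch of what such a walk looks like in code; it is not NVIDIA's StyleGAN code, the `generator` below is a random stand-in for a trained network G(z) → image, and the latent size and frame count are arbitrary assumptions.

```python
# Toy sketch of a latent space walk: interpolate between two latent codes and
# run each intermediate code through a generator to get one animation frame.
# The generator here is a random stand-in, not a trained StyleGAN model.
import numpy as np

def generator(z):
    # Placeholder for a trained generator G(z) -> image; it simply projects the
    # latent code to a 64x64 array so the example runs end to end.
    projection = np.random.default_rng(0).standard_normal((64 * 64, z.size))
    return (projection @ z).reshape(64, 64)

def latent_walk(z_start, z_end, num_frames=30):
    frames = []
    for t in np.linspace(0.0, 1.0, num_frames):
        z = (1.0 - t) * z_start + t * z_end   # linear interpolation in latent space
        frames.append(generator(z))
    return frames

z0 = np.random.default_rng(1).standard_normal(512)
z1 = np.random.default_rng(2).standard_normal(512)
animation = latent_walk(z0, z1)
print(len(animation), animation[0].shape)        # 30 frames of 64x64 "images"
```

Smooth, consistent video then comes down to how gracefully the generator maps neighboring latent codes to neighboring images, which is exactly what the texture-sticking discussion above is about.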
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.8, "end": 10.72, "text": " Today, we will see how a small change to an already existing learning-based technique"}, {"start": 10.72, "end": 14.16, "text": " can result in a huge difference in its results."}, {"start": 14.16, "end": 19.76, "text": " This is Stahlgantu, a technique that appeared in December 2019."}, {"start": 19.76, "end": 25.36, "text": " It is a neural network-based learning algorithm that is capable of synthesizing these"}, {"start": 25.36, "end": 31.28, "text": " eye-poppingly detailed images of human beings that don't even exist."}, {"start": 31.28, "end": 33.36, "text": " This is all synthetic."}, {"start": 33.36, "end": 39.56, "text": " It also supports a cool feature where we can give it a photo, then it embeds this image"}, {"start": 39.56, "end": 46.08, "text": " into a latent space and in this space we can easily apply modifications to it."}, {"start": 46.08, "end": 50.08, "text": " Okay, but what is this latent space thing?"}, {"start": 50.08, "end": 56.16, "text": " A latent space is a made up place where we are trying to organize data in a way that similar"}, {"start": 56.16, "end": 60.48, "text": " things are close to each other."}, {"start": 60.48, "end": 65.6, "text": " In our earlier work, we were looking to generate hundreds of variants of a material model to"}, {"start": 65.6, "end": 68.96, "text": " populate this scene."}, {"start": 68.96, "end": 74.56, "text": " In this latent space, we can concoct all of these really cool digital material models."}, {"start": 74.56, "end": 78.28, "text": " A link to this work is available in the video description."}, {"start": 78.28, "end": 84.6, "text": " Stahlgantu uses walks in a similar latent space to create these human faces and animate"}, {"start": 84.6, "end": 85.6, "text": " them."}, {"start": 85.6, "end": 88.64, "text": " So, let's see that."}, {"start": 88.64, "end": 94.6, "text": " When we take a walk in the internal latent space of this technique, we can generate animations."}, {"start": 94.6, "end": 97.44, "text": " Let's see how Stahlgantu does this."}, {"start": 97.44, "end": 102.88, "text": " It is a true miracle that a computer can generate images like this."}, {"start": 102.88, "end": 105.8, "text": " However, wait a second."}, {"start": 105.8, "end": 107.8, "text": " Look closely."}, {"start": 107.8, "end": 109.8, "text": " Did you notice it?"}, {"start": 109.8, "end": 111.88, "text": " Something is not right here."}, {"start": 111.88, "end": 117.6, "text": " Don't despair if not, it is hard to pin down what the exact problem is, but it is easy"}, {"start": 117.6, "end": 121.39999999999999, "text": " to see that there is some sort of flickering going on."}, {"start": 121.39999999999999, "end": 123.6, "text": " So, what is the issue?"}, {"start": 123.6, "end": 129.8, "text": " Well, the issue is that there are landmarks, for instance, the beard which don't really"}, {"start": 129.8, "end": 137.76, "text": " or just barely move and essentially the face is being generated under it with these constraints."}, {"start": 137.76, "end": 142.16, "text": " The authors refer to this problem as texture sticking."}, {"start": 142.16, "end": 145.88, "text": " The AI suffers from a sticky beard, if you will."}, {"start": 145.88, "end": 151.07999999999998, "text": " Imagine saying that 20 years ago to someone, you would end up in a madhouse."}, {"start": 151.07999999999998, "end": 
158.56, "text": " Now, this new paper from scientists at Nvidia promises a tiny, but important architectural"}, {"start": 158.56, "end": 159.72, "text": " change."}, {"start": 159.72, "end": 164.51999999999998, "text": " And we will see if this issue, which seems like quite a limitation, can be solved with"}, {"start": 164.51999999999998, "end": 166.84, "text": " it or not."}, {"start": 166.84, "end": 171.56, "text": " And now, hold on to your papers and let's see the new method."}, {"start": 171.56, "end": 175.64000000000001, "text": " Holy Mother of Papers!"}, {"start": 175.64000000000001, "end": 178.24, "text": " Do you see what I see here?"}, {"start": 178.24, "end": 184.12, "text": " The sticky beards are a thing of the past and facial landmarks are allowed to fly about"}, {"start": 184.12, "end": 185.44, "text": " freely."}, {"start": 185.44, "end": 190.96, "text": " And not only that, but the results are much smoother and more consistent to the point that"}, {"start": 190.96, "end": 195.68, "text": " it can not only generate photorealistic images of virtual humans."}, {"start": 195.68, "end": 198.6, "text": " Come on, that is so 2020."}, {"start": 198.6, "end": 202.52, "text": " This generates photorealistic videos of virtual humans."}, {"start": 202.52, "end": 209.68, "text": " So, I wonder did the new technique also inherit the generality of Stargain 2?"}, {"start": 209.68, "end": 216.24, "text": " Let's see, we know that it works on real humans and now paintings and art pieces."}, {"start": 216.24, "end": 220.0, "text": " Yes, excellent!"}, {"start": 220.0, "end": 224.08, "text": " And of course, cats and other animals as well."}, {"start": 224.08, "end": 229.60000000000002, "text": " The small change that creates these beautiful results is what we call an aquivariant filter"}, {"start": 229.60000000000002, "end": 236.28, "text": " design, essentially this ensures that final details move together in the inner thinking"}, {"start": 236.28, "end": 238.24, "text": " of the neural network."}, {"start": 238.24, "end": 243.68, "text": " This is an excellent lesson on how a small and carefully designed architectural change"}, {"start": 243.68, "end": 247.36, "text": " can have a huge effect on the results."}, {"start": 247.36, "end": 252.20000000000002, "text": " If we look under the hood, we see that the inner representation of the new method is"}, {"start": 252.2, "end": 255.48, "text": " completely different from its predecessor."}, {"start": 255.48, "end": 262.88, "text": " You see, the features are indeed allowed to fly about, and the new method even seems to"}, {"start": 262.88, "end": 268.91999999999996, "text": " have invented a coordinate system of sorts to be able to move these things around."}, {"start": 268.91999999999996, "end": 270.96, "text": " What an incredible idea!"}, {"start": 270.96, "end": 275.76, "text": " These learning algorithms are getting better and better with every published paper."}, {"start": 275.76, "end": 277.91999999999996, "text": " Now, good news."}, {"start": 277.92, "end": 284.04, "text": " It is only marginally more expensive to train and run than Stargain 2, and the less good"}, {"start": 284.04, "end": 290.28000000000003, "text": " news is that training these huge neural networks still requires a great deal of computation."}, {"start": 290.28000000000003, "end": 296.0, "text": " The silver lining is that if it has been trained once, it can be run inexpensively for as"}, {"start": 296.0, "end": 297.76, "text": " long as we wish."}, {"start": 
297.76, "end": 304.12, "text": " So, images of virtual humans might soon become a thing of the past because from now on,"}, {"start": 304.12, "end": 307.48, "text": " we can generate photorealistic videos of them."}, {"start": 307.48, "end": 309.24, "text": " Absolutely amazing!"}, {"start": 309.24, "end": 311.12, "text": " What a time to be alive!"}, {"start": 311.12, "end": 314.32, "text": " This video has been supported by weights and biases."}, {"start": 314.32, "end": 319.40000000000003, "text": " Check out the recent offering, fully connected, a place where they bring machine learning"}, {"start": 319.40000000000003, "end": 326.20000000000005, "text": " practitioners together to share and discuss their ideas, learn from industry leaders, and"}, {"start": 326.20000000000005, "end": 329.0, "text": " even collaborate on projects together."}, {"start": 329.0, "end": 334.0, "text": " You see, I get messages from you fellow scholars telling me that you have been inspired by the"}, {"start": 334.0, "end": 338.4, "text": " series, but don't really know where to start."}, {"start": 338.4, "end": 339.92, "text": " And here it is!"}, {"start": 339.92, "end": 345.6, "text": " Fully connected is a great way to learn about the fundamentals, how to reproduce experiments,"}, {"start": 345.6, "end": 349.48, "text": " get your papers accepted to a conference, and more."}, {"start": 349.48, "end": 356.72, "text": " Make sure to visit them through wnb.me slash papers or just click the link in the video description."}, {"start": 356.72, "end": 361.8, "text": " Our thanks to weights and biases for their longstanding support and for helping us make"}, {"start": 361.8, "end": 363.36, "text": " better videos for you."}, {"start": 363.36, "end": 367.44, "text": " Thanks for watching and for your generous support, and I'll see you next time!"}]
Two Minute Papers
https://www.youtube.com/watch?v=1F-WnarzkX8
Neural Materials Are Amazing! 🔮
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/stacey/xray/reports/X-Ray-Illumination--Vmlldzo4MzA5MQ 📝 The paper "NeuMIP: Multi-Resolution Neural Materials" is available here: https://cseweb.ucsd.edu/~viscomp/projects/NeuMIP/ 📝 Our latent space technique: https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/ 📝 Our “Photoshop” technique: https://users.cg.tuwien.ac.at/zsolnai/gfx/photorealistic-material-editing/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Or join us here: https://www.youtube.com/user/keeroyz/join Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. This image is the result of a light simulation program created by research scientists. It looks absolutely beautiful, but the light simulation algorithm is only part of the recipe here. To create something like this, we also need a good artist who can produce high quality geometry, lighting, and of course, good, lifelike material models. For instance, without the materials part, we would see something like this. Not very exciting, right? Previously, we introduced a technique that learns our preferences and helps fill these scenes with materials. This work can also generate variants of the same materials as well. In a later technique, we could even take a sample image, completely destroy it in Photoshop, and our neural networks would find a photorealistic material that matches these crazy ideas. Links to both of these works are available in the video description. And to improve these digital materials, this new paper introduces something that the authors call a multi-resolution neural material representation. What is that? Well, it is something that is able to put amazingly complex material models in our light transport programs, and not only that, but... Oh my! Look at that! We can even zoom in so far that we see the snagged threads. That is the magic of the multi-resolution part of the technique. The neural part means that the technique looks at lots of measured material reflectance data, which is what describes a real-world material, and compresses this description down into a representation that is manageable. Okay, why? Well, look, here is a reference material. You see, these are absolutely beautiful, no doubt, but are often prohibitively expensive to store directly. This new method introduces these neural materials to approximate the real-world materials, but in a way that is super cheap to compute and store. So, our first question is, how do these neural materials compare to these real reference materials? What do you think? How much worse quality do we have to expect to be able to use these in our rendering systems? Well, you tell me, because you are already looking at the new technique right now. I quickly switched from the reference to the result with the new method already. How cool is that? Look, this was the expensive reference material, and this is the fast neural material counterpart for it. So, how hard is this to pull off? Well, let's look at some results side by side. Here is the reference, and here are two techniques from one and two years ago that try to approximate it. And you see that if we zoom in real close, these fine details are gone. Do we have to live with that? Or, maybe, can the new method do better? Hold on to your papers, and let's see. Wow! While it is not 100% perfect, there is absolutely no contest compared to the previous methods. It outperforms them handily in every single case of these complex materials I came across. And when I say complex materials, I really mean it. Look at how beautifully it captures not only the texture of this piece of embroidery, but when we move the light source around, oh wow! Look at the area here, around the vertical black stripe, and how its specular reflections change with the lighting. And note that none of these are real images; all of them come from a computer program. This is truly something else, loving it. So, if it really works so well, where is the catch? Does it only work on cloth-like materials? No, no, not in the slightest. 
It also works really well on rocks, insulation foam, even turtle shells, and a variety of other materials. The paper contains a ton more examples than we can showcase here, so make sure to have a look in the video description. I guess this means that it requires a huge and expensive neural network to pull off, right? Well, let's have a look. Whoa! Now that's something. It does not require a deep and heavy-duty neural network; just four layers are enough. And this, by today's standards, is a lightweight network that can take these expensive reference materials and compress them down in a matter of milliseconds. And they almost look the same. Materials in our computer simulations straight from reality, yes please. So, from now on, we will get cheaper and better material models for animation movies, computer games, and visualization applications. Sign me up right now. What a time to be alive! This episode has been supported by Weights & Biases. In this post, they show you how to use their tool to analyze chest x-rays and more. If you work with learning algorithms on a regular basis, make sure to check out Weights & Biases. Their system is designed to help you organize your experiments, and it is so good it could shave off weeks or even months of work from your projects, and is completely free for all individuals, academics, and open source projects. This really is as good as it gets, and it is hardly a surprise that they are now used by over 200 companies and research institutions. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support, and for helping us make better videos for you. Thanks for watching, and for your generous support, and I'll see you next time!
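The transcript mentions that a lightweight four-layer network is enough to stand in for an expensive measured material. The sketch below shows the general shape of such a neural material as a tiny MLP mapping a surface position plus light and view directions to an RGB reflectance value. The weights are random stand-ins and every size is an assumption; the actual NeuMIP architecture and training procedure differ.

```python
# Toy sketch of a "neural material": a small 4-layer MLP maps (u, v, light
# direction, view direction) to RGB reflectance, replacing a huge measured
# reflectance table. Weights are random stand-ins; the real method trains them.
import numpy as np

rng = np.random.default_rng(0)
sizes = [8, 32, 32, 32, 3]        # input: u, v, light (3), view (3) -> RGB
weights = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def neural_material(uv, light_dir, view_dir):
    x = np.concatenate([uv, light_dir, view_dir])
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(x @ W + b, 0.0)            # ReLU hidden layers
    return x @ weights[-1] + biases[-1]           # linear RGB output

rgb = neural_material(np.array([0.25, 0.75]),     # texture coordinate
                      np.array([0.0, 0.0, 1.0]),  # light direction
                      np.array([0.3, 0.1, 0.95])) # view direction
print(rgb)
```

A renderer would query such a network once per shading point instead of reading a multi-gigabyte measured reflectance table, which is where the storage and speed savings described above come from.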
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Ejorna Ifehir."}, {"start": 5.0, "end": 11.0, "text": " This image is the result of a light simulation program created by research scientists."}, {"start": 11.0, "end": 18.0, "text": " It looks absolutely beautiful, but the light simulation algorithm is only part of the recipe here."}, {"start": 18.0, "end": 30.0, "text": " To create something like this, we also need a good artist who can produce high quality geometry, lighting, and of course, good, lifelike material models."}, {"start": 30.0, "end": 35.0, "text": " For instance, without the material's part, we would see something like this."}, {"start": 35.0, "end": 37.0, "text": " Not very exciting, right?"}, {"start": 37.0, "end": 45.0, "text": " Previously, we introduced a technique that learns our preferences and helps filling these scenes with materials."}, {"start": 45.0, "end": 51.0, "text": " This work can also generate variants of the same materials as well."}, {"start": 53.0, "end": 68.0, "text": " In a later technique, we could even take a sample image, completely destroy it in Photoshop, and our neural networks would find a photorealistic material that matches these crazy ideas."}, {"start": 68.0, "end": 83.0, "text": " Links to both of these works are available in the video description. And to improve these digital materials, this new paper introduces something that the authors call a multi-resolution neural material representation."}, {"start": 83.0, "end": 85.0, "text": " What is that?"}, {"start": 85.0, "end": 95.0, "text": " Well, it is something that is able to put amazingly complex material models in our light transport programs, and not only that, but..."}, {"start": 95.0, "end": 104.0, "text": " Oh my! Look at that! We can even zoom in so far that we see the snagged threads."}, {"start": 104.0, "end": 108.0, "text": " That is the magic of the multi-resolution part of the technique."}, {"start": 108.0, "end": 115.0, "text": " The neural part means that the technique looks at lots of measured material reflectance data."}, {"start": 115.0, "end": 124.0, "text": " This is what describes a real-world material, and compresses this description down into a representation that is manageable."}, {"start": 124.0, "end": 139.0, "text": " Okay, why? Well, look, here is a reference material. You see, these are absolutely beautiful, no doubt, but are often prohibitively expensive to store directly."}, {"start": 139.0, "end": 150.0, "text": " This new method introduces these neural materials to approximate the real-world materials, but in a way that is super cheap to compute and store."}, {"start": 150.0, "end": 157.0, "text": " So, our first question is, how do these neural materials compare to these real reference materials?"}, {"start": 157.0, "end": 165.0, "text": " What do you think? How much worse equality do we have to expect to be able to use these in our rendering systems?"}, {"start": 165.0, "end": 175.0, "text": " Well, you tell me because you are already looking at the new technique right now. I quickly switched from the reference to the result with the new method already."}, {"start": 175.0, "end": 188.0, "text": " How cool is that? Look, this was the expensive reference material, and this is the fast neural material counterpart for it."}, {"start": 188.0, "end": 202.0, "text": " So, how hard is this to pull off? Well, let's look at some results side by side. 
Here is the reference, and here are two techniques from one and two years ago that try to approximate it."}, {"start": 202.0, "end": 208.0, "text": " And you see that if we zoom in real close, these fine details are gone."}, {"start": 208.0, "end": 217.0, "text": " Do we have to live with that? Or, maybe, can the new method do better? Hold on to your papers, and let's see."}, {"start": 217.0, "end": 226.0, "text": " Wow! While it is not 100% perfect, there is absolutely no contest compared to the previous methods."}, {"start": 226.0, "end": 232.0, "text": " It outperforms them handily in every single case of these complex materials I came across."}, {"start": 232.0, "end": 246.0, "text": " And when I say complex materials, I really mean it. Look at how beautifully it captures not only the texture of this piece of embroidery, but when we move the light source around, oh wow!"}, {"start": 246.0, "end": 254.0, "text": " Look at the area here, around the vertical black stripe, and how its specular reflections change with the lighting."}, {"start": 254.0, "end": 260.0, "text": " And note that none of these are real images, all of them come from a computer program."}, {"start": 260.0, "end": 264.0, "text": " This is truly something else, loving it."}, {"start": 264.0, "end": 271.0, "text": " So, if it really works so well, where is the catch? Does it only work on cloth-like materials?"}, {"start": 271.0, "end": 283.0, "text": " No, no, not in the slightest. It also works really well on rocks, insulation foam, even turtle shells, and a variety of other materials."}, {"start": 283.0, "end": 291.0, "text": " The paper contains a ton more examples than we can showcase here, so make sure to have a look in the video description."}, {"start": 291.0, "end": 298.0, "text": " I guess this means that it requires a huge and expensive neural network to pull off, right?"}, {"start": 298.0, "end": 300.0, "text": " Well, let's have a look."}, {"start": 300.0, "end": 310.0, "text": " Whoa! Now that's something. It does not require a deep and heavy-duty neural network, just four layers are enough."}, {"start": 310.0, "end": 321.0, "text": " And this, by today's standard, is a lightweight network that can take these expensive reference materials and compress them down in a matter of milliseconds."}, {"start": 321.0, "end": 329.0, "text": " And they almost look the same. Materials in our computer simulations straight from reality, yes please."}, {"start": 329.0, "end": 338.0, "text": " So, from now on, we will get cheaper and better material models for animation movies, computer games, and visualization applications."}, {"start": 338.0, "end": 342.0, "text": " Sign me up right now. What a time to be alive!"}, {"start": 342.0, "end": 351.0, "text": " This episode has been supported by weights and biases. 
In this post, they show you how to use their tool to analyze chest x-rays and more."}, {"start": 351.0, "end": 357.0, "text": " If you work with learning algorithms on a regular basis, make sure to check out weights and biases."}, {"start": 357.0, "end": 366.0, "text": " Their system is designed to help you organize your experiments, and it is so good it could shave off weeks or even months of work from your projects,"}, {"start": 366.0, "end": 371.0, "text": " and is completely free for all individuals, academics, and open source projects."}, {"start": 371.0, "end": 380.0, "text": " This really is as good as it gets, and it is hardly a surprise that they are now used by over 200 companies and research institutions."}, {"start": 380.0, "end": 389.0, "text": " Make sure to visit them through wnb.com slash papers, or just click the link in the video description, and you can get a free demo today."}, {"start": 389.0, "end": 395.0, "text": " Our thanks to weights and biases for their long-standing support, and for helping us make better videos for you."}, {"start": 395.0, "end": 399.0, "text": " Thanks for watching, and for your generous support, and I'll see you next time!"}]
Two Minute Papers
https://www.youtube.com/watch?v=VqeNSZqiBzc
A Simulation That Looks Like Reality! 🤯
❤️ Check out Perceptilabs and sign up for a free demo here: https://www.perceptilabs.com/papers 📝 The paper "Solid-Fluid Interaction with Surface-Tension-Dominant Contact" is available here: https://cs.dartmouth.edu/~bozhu/papers/surface_tension_sfi.pdf https://arxiv.org/abs/2105.08471 ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Or join us here: https://www.youtube.com/user/keeroyz/join Thumbnail background design: http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. I am stunned by this new graphics paper that promises to simulate three-way coupling and enables beautiful surface tension simulations like this and this and more. Yes, none of this is real footage. These are all simulated on a computer. I have seen quite a few simulations and I am still baffled by this. How is this even possible? Also, three-way coupling, eh? That is quite peculiar, to the point that the term doesn't even sound real. Let's find out why together. So what does that mean exactly? Well, first let's have a look at one-way coupling. As the box moves here, it has an effect on the smoke plume around it. This example also showcases one-way coupling, where the falling plate stirs up the smoke around it. And now on to two-way coupling. In this case, similarly to previous ones, the boxes are allowed to move the smoke, but the added two-way coupling part means that now the smoke is also allowed to blow away the boxes. What's more, the vortices here on the right were even able to suspend the red box in the air for a few seconds, an excellent demonstration of a beautiful phenomenon. So coupling means interaction between different kinds of objects, and two-way coupling seems like the real deal. Here it is also required to compute how this fiery smoke trail propels the rocket upward. Wait, we just mentioned that the new method performs three-way coupling. Two-way was solid-fluid interaction, and it seemed absolutely amazing, so what is the third element then? And why do we even need that? Well, depending on what object is in contact with the liquid, gravity, buoyancy, and surface tension forces need additional considerations. To be able to do this, now look carefully. Yes, there is the third element: it simulates this thin liquid membrane too, which is in interaction with the solid and the fluid at the same time. And with that, please meet three-way coupling. So what can it do? It can simulate this paperclip floating on water. That is quite remarkable, because the density of the paperclip is 8 times as much as the water itself, and yet it still sits on top of the water. But how is that possible? Especially given that gravity wants to constantly pull down a solid object. Well, it has two formidable opponents, two forces that try to counteract it: one is buoyancy, which is an upward force, and two, the capillary force, which is a consequence of the formation of a thin membrane. If these two friends are as strong as gravity, the object will float. But this balance is very delicate; for instance, in the case of milk and cherries, this happens. And during that time, the simulator creates a beautiful, bent liquid surface that is truly a sight to behold. Once again, all of this footage is simulated on a computer. The fact that this new work can simulate these three physical systems is a true miracle. Absolutely incredible. Now, if you have been holding onto your paper so far, squeeze that paper, because we will now do my favorite thing in any simulation paper, and that is when we let reality be our judge and compare the simulated results to real footage. This is a photograph. And now comes the simulation. Whoa! I have to say, if no one told me which is which, I might not be able to tell. And I am delighted by this fact, so much so that I had to ask the authors to double check if this really is a simulation, and they managed to reproduce the illumination of these scenes perfectly. Yes, they did. Fantastic attention to detail. Very impressive. 
So how long do we have to wait for all this? For a two-dimensional scene, it pretty much runs interactively. That is great news. And we are firmly in the seconds-per-frame region for the 3D scenes, but look, the boat and leaf scene runs in less than two seconds per time step. That is absolutely amazing. Not real time, because one frame contains several time steps, but why would it be real time? This is the kind of paper that makes something previously impossible possible, and it even does that swiftly. I would wager we are just one or at most two more papers away from getting this in real time. This is unbelievable progress in just one paper. And all handcrafted, no learning algorithms anywhere to be seen. Huge congratulations to the authors. What a time to be alive. Perceptilabs is a visual API for TensorFlow, carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models, with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers to easily install the free local version of their system today. Our thanks to Perceptilabs for their support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
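The floating-paperclip explanation above boils down to a simple force balance: the object floats if buoyancy plus the capillary force from the thin surface membrane can match its weight. Here is a back-of-the-envelope sketch of that balance; the numbers are illustrative guesses, not values from the paper, and the capillary term is a crude upper bound.

```python
# Toy force balance behind the floating paperclip: buoyancy + capillary force
# versus gravity. Illustrative numbers only; not taken from the paper.
GRAVITY = 9.81            # m/s^2
WATER_DENSITY = 1000.0    # kg/m^3
SURFACE_TENSION = 0.072   # N/m, water at room temperature

def floats(mass, submerged_volume, contact_line_length):
    weight = mass * GRAVITY
    buoyancy = WATER_DENSITY * submerged_volume * GRAVITY
    capillary = SURFACE_TENSION * contact_line_length  # crude vertical upper bound
    return buoyancy + capillary >= weight

# A steel paperclip: about half a gram, a tiny displaced volume, but a long
# three-phase contact line running along both sides of the wire.
print(floats(mass=0.0005, submerged_volume=2e-7, contact_line_length=0.2))  # True
```

With these illustrative numbers the capillary term dominates, which mirrors the transcript's point: the paperclip is roughly eight times denser than water, so buoyancy alone would never keep it up, and it is the thin membrane that tips the balance.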
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Dr. Karajon A. Fahir."}, {"start": 4.8, "end": 11.52, "text": " I am stunned by this no-graphics paper that promises to simulate 3 way coupling and enables"}, {"start": 11.52, "end": 20.92, "text": " beautiful surface tangent simulations like this and this and more."}, {"start": 20.92, "end": 23.96, "text": " Yes, none of this is real footage."}, {"start": 23.96, "end": 26.76, "text": " These are all simulated on a computer."}, {"start": 26.76, "end": 31.560000000000002, "text": " I have seen quite a few simulations and I am still baffled by this."}, {"start": 31.560000000000002, "end": 33.88, "text": " How is this even possible?"}, {"start": 33.88, "end": 36.28, "text": " Also 3 way coupling, eh?"}, {"start": 36.28, "end": 41.68, "text": " That is quite peculiar to the point that the term doesn't even sound real."}, {"start": 41.68, "end": 43.8, "text": " Let's find out why together."}, {"start": 43.8, "end": 46.32, "text": " So what does that mean exactly?"}, {"start": 46.32, "end": 50.040000000000006, "text": " Well, first let's have a look at one way coupling."}, {"start": 50.04, "end": 57.04, "text": " As the box moves here, it has an effect on the smoke plume around it."}, {"start": 57.04, "end": 62.96, "text": " This example also showcases one way coupling where the falling plate stirs up the smoke around"}, {"start": 62.96, "end": 63.96, "text": " it."}, {"start": 63.96, "end": 66.96000000000001, "text": " And now on to 2 way coupling."}, {"start": 66.96000000000001, "end": 73.0, "text": " In this case, similarly to previous ones, the boxes are allowed to move the smoke, but"}, {"start": 73.0, "end": 79.68, "text": " the added 2 way coupling part means that now the smoke is also allowed to blow away the"}, {"start": 79.68, "end": 80.68, "text": " boxes."}, {"start": 80.68, "end": 86.96000000000001, "text": " What's more, the vertices here on the right were even able to suspend the red box in"}, {"start": 86.96000000000001, "end": 93.52000000000001, "text": " the air for a few seconds, an excellent demonstration of a beautiful phenomenon."}, {"start": 93.52000000000001, "end": 100.60000000000001, "text": " So coupling means interaction between different kinds of objects and 2 way coupling seems"}, {"start": 100.60000000000001, "end": 102.52000000000001, "text": " like the real deal."}, {"start": 102.52000000000001, "end": 109.52000000000001, "text": " Here it is also required to compute how this fiery smoke trail propels the rocket upward."}, {"start": 109.52, "end": 114.67999999999999, "text": " First way, we just mentioned that the new method performs 3 way coupling."}, {"start": 114.67999999999999, "end": 120.92, "text": " 2 way was solid fluid interactions and it seemed absolutely amazing, so what is the third"}, {"start": 120.92, "end": 122.64, "text": " element then?"}, {"start": 122.64, "end": 124.88, "text": " And why do we even need that?"}, {"start": 124.88, "end": 132.0, "text": " Well, depending on what object is in contact with the liquid, gravity, buoyancy and surface"}, {"start": 132.0, "end": 135.88, "text": " tension forces need additional considerations."}, {"start": 135.88, "end": 139.56, "text": " To be able to do this, now look carefully."}, {"start": 139.56, "end": 146.04, "text": " Yes, there is the third element, it simulates this thin liquid membrane tool which is in"}, {"start": 146.04, "end": 150.79999999999998, "text": " interaction with the solid and the fluid at the 
same time."}, {"start": 150.79999999999998, "end": 154.04, "text": " And with that, please meet 3 way coupling."}, {"start": 154.04, "end": 156.07999999999998, "text": " So what can it do?"}, {"start": 156.07999999999998, "end": 160.35999999999999, "text": " It can simulate this paperclip floating on water."}, {"start": 160.35999999999999, "end": 165.84, "text": " That is quite remarkable because the density of the paperclip is 8 times as much as"}, {"start": 165.84, "end": 170.8, "text": " the water itself and yet it still sits on top of the water."}, {"start": 170.8, "end": 173.20000000000002, "text": " But, how is that possible?"}, {"start": 173.20000000000002, "end": 178.8, "text": " It's specially given that gravity wants to constantly pull down a solid object."}, {"start": 178.8, "end": 185.96, "text": " Well, it has 2 formidable opponents, 2 forces that try to counteract it, 1 is buoyancy,"}, {"start": 185.96, "end": 192.08, "text": " which is an upward force and 2 the capillary force which is a consequence of the formation"}, {"start": 192.08, "end": 193.92000000000002, "text": " of a thin membrane."}, {"start": 193.92, "end": 199.0, "text": " If these 2 friends are as strong as gravity, the object will float."}, {"start": 199.0, "end": 207.48, "text": " But, this balance is very delicate, for instance, in the case of milk and cherries, this happens."}, {"start": 207.48, "end": 213.79999999999998, "text": " And during that time, the simulator creates a beautiful, bent liquid surface that is truly"}, {"start": 213.79999999999998, "end": 216.44, "text": " a sight to behold."}, {"start": 216.44, "end": 220.56, "text": " Once again, all of this footage is simulated on a computer."}, {"start": 220.56, "end": 226.64000000000001, "text": " The fact that this new work can simulate these 3 physical systems is a true miracle."}, {"start": 226.64000000000001, "end": 228.68, "text": " Absolutely incredible."}, {"start": 228.68, "end": 233.92000000000002, "text": " Now, if you have been holding onto your paper so far, squeeze that paper because we will"}, {"start": 233.92000000000002, "end": 240.56, "text": " now do my favorite thing in any simulation paper and that is when we let reality be our"}, {"start": 240.56, "end": 245.56, "text": " judge and compare the simulated results to real footage."}, {"start": 245.56, "end": 247.56, "text": " This is a photograph."}, {"start": 247.56, "end": 250.88, "text": " And now comes the simulation."}, {"start": 250.88, "end": 251.88, "text": " Whoa!"}, {"start": 251.88, "end": 257.6, "text": " I have to say, if no one told me which is which, I might not be able to tell."}, {"start": 257.6, "end": 263.72, "text": " And I am delighted to know and by this fact so much so that I had to ask the authors to"}, {"start": 263.72, "end": 269.2, "text": " double check if this really is a simulation and they managed to reproduce the illumination"}, {"start": 269.2, "end": 271.4, "text": " of these scenes perfectly."}, {"start": 271.4, "end": 273.64, "text": " Yes, they did."}, {"start": 273.64, "end": 275.92, "text": " Fantastic attention to detail."}, {"start": 275.92, "end": 277.52, "text": " Very impressive."}, {"start": 277.52, "end": 280.76, "text": " So how long do we have to wait for all this?"}, {"start": 280.76, "end": 285.24, "text": " For a two-dimensional scene, it pretty much runs interactively."}, {"start": 285.24, "end": 287.32, "text": " That is, great news."}, {"start": 287.32, "end": 293.24, "text": " And we are firmly in the second-per-frame region 
for the 3D scenes, but look, the boat and"}, {"start": 293.24, "end": 298.12, "text": " leave scene runs in less than two seconds per time step."}, {"start": 298.12, "end": 300.76, "text": " That is absolutely amazing."}, {"start": 300.76, "end": 307.12, "text": " Not real time because one frame contains several time steps, but why would it be real time?"}, {"start": 307.12, "end": 313.0, "text": " This is the kind of paper that makes something previously impossible, possible, and it even"}, {"start": 313.0, "end": 315.12, "text": " does that swiftly."}, {"start": 315.12, "end": 321.32, "text": " I would wager we are just one or at most two more papers away from getting this in real"}, {"start": 321.32, "end": 322.32, "text": " time."}, {"start": 322.32, "end": 327.04, "text": " This is unbelievable progress in just one paper."}, {"start": 327.04, "end": 332.24, "text": " And all handcrafted, no learning algorithms anywhere to be seen."}, {"start": 332.24, "end": 334.64, "text": " Huge congratulations to the authors."}, {"start": 334.64, "end": 336.44, "text": " What a time to be alive."}, {"start": 336.44, "end": 341.68, "text": " Our Saptilabs is a visual API for TensorFlow carefully designed to make machine learning"}, {"start": 341.68, "end": 344.04, "text": " as intuitive as possible."}, {"start": 344.04, "end": 348.96, "text": " This gives you a faster way to build out models with more transparency into how your model"}, {"start": 348.96, "end": 353.2, "text": " is architected, how it performs, and how to debug it."}, {"start": 353.2, "end": 357.88, "text": " Look, it lets you toggle between the visual modeler and the code editor."}, {"start": 357.88, "end": 362.96, "text": " It even generates visualizations for all the model variables and gives you recommendations"}, {"start": 362.96, "end": 367.59999999999997, "text": " both during modeling and training and does all this automatically."}, {"start": 367.59999999999997, "end": 372.2, "text": " I only wish I had a tool like this when I was working on my neural networks during my"}, {"start": 372.2, "end": 373.68, "text": " PhD years."}, {"start": 373.68, "end": 379.44, "text": " Visit perceptilabs.com slash papers to easily install the free local version of their system"}, {"start": 379.44, "end": 380.44, "text": " today."}, {"start": 380.44, "end": 385.0, "text": " Our thanks to perceptilabs for their support and for helping us make better videos for"}, {"start": 385.0, "end": 386.0, "text": " you."}, {"start": 386.0, "end": 396.0, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
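As a rough illustration of the force balance described in the transcript above (gravity versus buoyancy and the capillary force of the bent water surface), here is a minimal Python sketch. The paperclip mass, density, and contact-line length are assumed values for illustration only; this is a back-of-the-envelope check, not the paper's three-way coupled simulator.

# Toy check (not the paper's solver): can surface tension plus buoyancy
# hold up a steel paperclip that is ~8x denser than water?
# All numbers below are rough assumptions for illustration.

G = 9.81            # gravity, m/s^2
RHO_WATER = 1000.0  # kg/m^3
RHO_STEEL = 8000.0  # kg/m^3, roughly 8x the density of water
GAMMA = 0.072       # surface tension of water, N/m

def paperclip_floats(mass_kg: float, contact_line_m: float) -> bool:
    """Compare gravity against the two counteracting forces from the video:
    buoyancy (upward) and the capillary force from the thin surface membrane."""
    weight = mass_kg * G
    # Buoyancy from the water displaced by the submerged wire volume.
    volume = mass_kg / RHO_STEEL
    buoyancy = RHO_WATER * volume * G
    # Vertical capillary force, at most surface tension times the contact-line length.
    capillary = GAMMA * contact_line_m
    return buoyancy + capillary >= weight

# Assumed values: a ~0.5 g paperclip with ~20 cm of three-phase contact line.
print(paperclip_floats(mass_kg=0.0005, contact_line_m=0.20))  # True: it can float
print(paperclip_floats(mass_kg=0.005,  contact_line_m=0.20))  # False: too heavy

With these assumed numbers the capillary term dominates, which matches the delicate balance the narration describes: make the object a few times heavier and it sinks.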
Two Minute Papers
https://www.youtube.com/watch?v=BpApq2EPDXE
This Magical AI Makes Your Photos Move! 🤳
❤️ Check out the Gradient Dissent podcast by Weights & Biases: http://wandb.me/gd  📝 The paper "Endless Loops: Detecting and Animating Periodic Patterns in Still Images" and the app are available here: https://pub.res.lightricks.com/endless-loops/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit - Pascal Wiemers: https://pixabay.com/images/id-792193/ Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is 2 Minute Papers with Dr. Károly Zsolnai-Fehér. Look at this video of a moving escalator. Nothing too crazy going on, only the escalator is moving. And I am wondering, would it be possible to not record a video for this, just an image, and have one of these amazing learning-based algorithms animate it? Well, that is easier said than done. Look, this is what was possible with the research work from two years ago, but the results are… well, what you see here. So how about a method from one year ago? This is the result, a great deal of improvement, but the water is not animated in this region and is generally all over the place, and we still have a lot of artifacts around the fence. And now hold on to your papers and let's see this new method, and… Whoa! Look at that! What an improvement! Apparently, we can now give this one a still image, and for the things that should move, it makes them move. It is still not perfect by any means, but this is so much progress in just two years. And there's more. And get this: for the things that shouldn't move, it even imagines how they would move. It works on this building really well, but it also imagines how my tie would move around, or my beard, which is not mine by the way, but was made by a different AI, or the windows. Thank you very much to the authors of the paper for generating these results just for us. And this can lead to really cool artistic effects, for instance, this moving brick wall, or animating the stairs here, loving it. So how does this work exactly? Does it know what regions to animate? No, it doesn't, and it shouldn't. We can specify that ourselves by using a brush to highlight the region that we wish to see animated, and we also have a little more artistic control over the results by prescribing a direction in which things should go. And it appears to work on a really wide variety of images, which is one of its most appealing features. Here are some of my favorite results. I particularly love the one with the rotation here. Very impressive. Now, let's compare it to the previous method from just one year ago, and let's see what the numbers say. Well, they say that the previous method performs better on fluid elements than the new one. My experience is that it indeed works better on specialized cases, like this fire texture, but on many water images, they perform roughly equivalently. Both are doing quite well. So is the new one really better? Well, here comes the interesting part. When presented with a diverse set of images, look, there is no contest here. The previous one either creates no results or incorrect results, and even when it does produce something, the new technique almost always comes out way better. Not only that, but let's see what the execution time looks like for the new method. How long do we have to wait for these results? The one from last year took 20 seconds per image and required a big honking graphics card, while the new one only needs your smartphone and runs in… what? Just one second. Loving it. So what images would you try with this one? Let me know in the comments below. Well, in fact, you don't need to just think about what you would try, because you can try this yourself. The link is available in the video description. Make sure to let me know in the comments below if you had some success with it. And here comes the even more interesting part. The previous method was using a learning-based algorithm, while this one is a bona fide, almost completely handcrafted technique. 
Partly because training neural networks requires a great deal of training data, and there are very few, if any, training examples for moving buildings and these other surreal phenomena. Ingenious. Huge congratulations to the authors for pulling this off. Now, of course, not even this technique is perfect; there are still cases where it does not create appealing results. However, since it only takes a second to compute, we can easily retry with a different pixel mask or direction and see if it does better. And just imagine what we will be able to do two more papers down the line. What a time to be alive. This video has been supported by Weights & Biases. They have an amazing podcast by the name Gradient Dissent, where they interview machine learning experts who discuss how they use learning-based algorithms to solve real-world problems. They have discussed biology, teaching robots, machine learning in outer space, and a whole lot more. Perfect for a fellow scholar with an open mind. Make sure to visit them through wandb.me slash gd or just click the link in the video description. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
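To make the user-facing workflow described above more concrete (paint a mask, prescribe a direction, get a looping animation from a still image), here is a minimal Python sketch. The cyclic pixel shift, the toy mask, and the direction vector are assumptions for illustration only; the actual paper detects periodic patterns and blends seams far more carefully.

import numpy as np

def naive_endless_loop(image, mask, direction=(0, 2), num_frames=20):
    """Crude stand-in for the user-facing idea: given a still image, a painted
    mask of the region to animate, and a motion direction, produce a sequence
    by cyclically shifting the masked pixels. Seams are not handled here."""
    dy, dx = direction
    h, w = image.shape[:2]
    frames = []
    for t in range(num_frames):
        shift = ((dy * t) % h, (dx * t) % w)
        moved = np.roll(image, shift=shift, axis=(0, 1))
        # Keep the original pixels outside the user-painted mask.
        frame = np.where(mask[..., None] > 0, moved, image)
        frames.append(frame)
    return frames

# Hypothetical usage with a random image and a rectangular mask.
img = np.random.randint(0, 255, size=(64, 64, 3), dtype=np.uint8)
msk = np.zeros((64, 64), dtype=np.uint8)
msk[20:50, 10:60] = 1
loop = naive_endless_loop(img, msk, direction=(0, 2), num_frames=20)
print(len(loop), loop[0].shape)  # 20 frames of shape (64, 64, 3)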
[{"start": 0.0, "end": 5.18, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Dr. Karajol Naifahir."}, {"start": 5.18, "end": 8.78, "text": " Look at this video of a moving escalator."}, {"start": 8.78, "end": 13.040000000000001, "text": " Nothing too crazy going on, only the escalator is moving."}, {"start": 13.040000000000001, "end": 19.28, "text": " And I am wondering, would it be possible to not record a video for this, just an image,"}, {"start": 19.28, "end": 23.8, "text": " and have one of these amazing learning-based algorithms animated?"}, {"start": 23.8, "end": 27.080000000000002, "text": " Well, that is easier said than done."}, {"start": 27.08, "end": 32.239999999999995, "text": " Look, this is what was possible with the research work from two years ago, but the results"}, {"start": 32.239999999999995, "end": 33.239999999999995, "text": " are?"}, {"start": 33.239999999999995, "end": 36.519999999999996, "text": " Well, what you see here."}, {"start": 36.519999999999996, "end": 39.48, "text": " So how about a method from one year ago?"}, {"start": 39.48, "end": 46.8, "text": " This is the result, a great deal of improvement, but the water is not animated in this region"}, {"start": 46.8, "end": 52.76, "text": " and is generally all around the place, and we still have a lot of artifacts around"}, {"start": 52.76, "end": 55.0, "text": " the fence."}, {"start": 55.0, "end": 59.16, "text": " And now hold on to your papers and let's see this new method, and\u2026"}, {"start": 59.16, "end": 62.16, "text": " Whoa!"}, {"start": 62.16, "end": 64.12, "text": " Look at that!"}, {"start": 64.12, "end": 66.12, "text": " What an improvement!"}, {"start": 66.12, "end": 72.72, "text": " Apparently, we can now give this one a still image, and for the things that should move,"}, {"start": 72.72, "end": 74.48, "text": " it makes them move."}, {"start": 74.48, "end": 81.16, "text": " It is still not perfect by any means, but this is so much progress in just two years."}, {"start": 81.16, "end": 87.39999999999999, "text": " And there's more, and get this for the things that shouldn't move, it even imagines how"}, {"start": 87.39999999999999, "end": 89.11999999999999, "text": " they should move."}, {"start": 89.11999999999999, "end": 98.19999999999999, "text": " It works on this building really well, but it also imagines how my tie would move around,"}, {"start": 98.19999999999999, "end": 106.24, "text": " or my beard, which is not mine by the way, but was made by a different AI, or the windows."}, {"start": 106.24, "end": 112.64, "text": " Thank you very much for the authors of the paper for generating these results only for us."}, {"start": 112.64, "end": 119.0, "text": " And this can lead to really cool artistic effects, for instance, this moving brick wall,"}, {"start": 119.0, "end": 124.08, "text": " or animating the stairs here, loving it."}, {"start": 124.08, "end": 126.96, "text": " So how does this work exactly?"}, {"start": 126.96, "end": 129.88, "text": " Does it know what regions to animate?"}, {"start": 129.88, "end": 132.56, "text": " No it doesn't, and it shouldn't."}, {"start": 132.56, "end": 137.44, "text": " We can specify that ourselves by using a brush to highlight the region that we wish to"}, {"start": 137.44, "end": 144.2, "text": " see animated, and we also have a little more artistic control over the results by prescribing"}, {"start": 144.2, "end": 147.76, "text": " a direction in which things should go."}, {"start": 147.76, "end": 153.24, "text": " And it appears 
to work on a really wide variety of images, which is only one of its most appealing"}, {"start": 153.24, "end": 155.48000000000002, "text": " features."}, {"start": 155.48000000000002, "end": 157.76, "text": " Here are some of my favorite results."}, {"start": 157.76, "end": 163.12, "text": " I particularly love the one with the upper rotation here."}, {"start": 163.12, "end": 164.88, "text": " Very impressive."}, {"start": 164.88, "end": 170.48, "text": " Now, let's compare it to the previous method from just one year ago, and let's see what"}, {"start": 170.48, "end": 171.95999999999998, "text": " the numbers say."}, {"start": 171.95999999999998, "end": 178.68, "text": " Well, they say that the previous method performs better on fluid elements than the new one."}, {"start": 178.68, "end": 186.28, "text": " My experience is that it indeed works better on specialized cases, like this fire texture,"}, {"start": 186.28, "end": 191.84, "text": " but on many water images, they perform roughly equivalently."}, {"start": 191.84, "end": 194.16, "text": " Both are doing quite well."}, {"start": 194.16, "end": 197.24, "text": " So is the new one really better?"}, {"start": 197.24, "end": 200.44, "text": " Well, here comes the interesting part."}, {"start": 200.44, "end": 206.36, "text": " When presented with a diverse set of images, look, there is no contest here."}, {"start": 206.36, "end": 213.16, "text": " The previous one creates no results, incorrect results, or if it does something, the new"}, {"start": 213.16, "end": 217.56, "text": " technique almost always comes out way better."}, {"start": 217.56, "end": 222.8, "text": " Not only that, but let's see what the execution time looks like for the new method."}, {"start": 222.8, "end": 225.56, "text": " How much do we have to wait for these results?"}, {"start": 225.56, "end": 232.0, "text": " The one from last year took 20 seconds per image and required a big honking graphics card,"}, {"start": 232.0, "end": 238.51999999999998, "text": " while the new one only needs your smartphone and runs in\u2026 what?"}, {"start": 238.51999999999998, "end": 240.6, "text": " Just one second."}, {"start": 240.6, "end": 242.2, "text": " Loving it."}, {"start": 242.2, "end": 245.48, "text": " So what images would you try with this one?"}, {"start": 245.48, "end": 247.16, "text": " Let me know in the comments below."}, {"start": 247.16, "end": 252.2, "text": " Well, in fact, you don't need to just think about what you would try, because you can"}, {"start": 252.2, "end": 253.83999999999997, "text": " try this yourself."}, {"start": 253.83999999999997, "end": 256.32, "text": " The link is available in the video description."}, {"start": 256.32, "end": 260.56, "text": " Make sure to let me know in the comments below if you had some success with it."}, {"start": 260.56, "end": 263.36, "text": " And here comes the even more interesting part."}, {"start": 263.36, "end": 269.24, "text": " The previous method was using a learning based algorithm while this one is a bona fide"}, {"start": 269.24, "end": 272.40000000000003, "text": " almost completely handcrafted technique."}, {"start": 272.40000000000003, "end": 277.8, "text": " Partly, because training neural networks requires a great deal of training data, and there"}, {"start": 277.8, "end": 285.64, "text": " are very few, if any, training examples for moving buildings and these other surreal phenomena."}, {"start": 285.64, "end": 287.16, "text": " Ingenious."}, {"start": 287.16, "end": 290.32, "text": " Huge 
congratulations to the authors for pulling this off."}, {"start": 290.32, "end": 295.2, "text": " Now, of course, not even this technique is perfect, there are still cases where it does"}, {"start": 295.2, "end": 297.32, "text": " not create appealing results."}, {"start": 297.32, "end": 303.8, "text": " However, since it only takes a second to compute, we can easily retry with a different pixel"}, {"start": 303.8, "end": 308.44, "text": " mask or direction and see if it does better."}, {"start": 308.44, "end": 313.76, "text": " And just imagine what we will be able to do two more papers down the line."}, {"start": 313.76, "end": 315.6, "text": " What a time to be alive."}, {"start": 315.6, "end": 318.84, "text": " This video has been supported by weights and biases."}, {"start": 318.84, "end": 324.08, "text": " They have an amazing podcast by the name Gradient Descent, where they interview machine learning"}, {"start": 324.08, "end": 330.08, "text": " experts who discuss how they use learning based algorithms to solve real world problems."}, {"start": 330.08, "end": 336.91999999999996, "text": " They have discussed biology, teaching robots, machine learning in outer space and a whole lot"}, {"start": 336.91999999999996, "end": 337.91999999999996, "text": " more."}, {"start": 337.91999999999996, "end": 340.91999999999996, "text": " Perfect for a fellow scholar with an open mind."}, {"start": 340.91999999999996, "end": 348.32, "text": " Make sure to visit them through wmb.me slash gd or just click the link in the video description."}, {"start": 348.32, "end": 353.15999999999997, "text": " Our thanks to weights and biases for their longstanding support and for helping us make"}, {"start": 353.16, "end": 354.64000000000004, "text": " better videos for you."}, {"start": 354.64, "end": 384.2, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Nz-X3cCeXVE
This AI Helps Testing The Games Of The Future! 🤖
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers ❤️ Their mentioned post is available here: https://colab.research.google.com/drive/1gKixa6hNUB8qrn1CfHirOfTEQm0qLCSS 📝 The paper "Improving Playtesting Coverage via Curiosity Driven Reinforcement Learning Agents" is available here: https://www.ea.com/seed/news/cog2021-curiosity-driven-rl-agents 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Or join us here: https://www.youtube.com/user/keeroyz/join Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #gamedev
Dear Fellow Scholars, this is two minute papers with Dr. Károly Zsolnai-Fehér. Have you ever gotten stuck in a video game? Or found a glitch that would prevent you from finishing it? As many of you know, most well-known computer games undergo a ton of playtesting, an important step that is supposed to unveil these issues. So, how is it possible that all these bugs and glitches still make it to the final product? Why did the creators not find these issues? Well, you see, playtesting is often done by humans. That sounds like a good thing, and it often is, but here comes the problem. Whenever we change something in the game, our changes may also have unintended consequences somewhere else, away from where we applied them. New oversights may appear elsewhere; for instance, moving a platform may make the level more playable, however, this might also happen. The player may now be able to enter a part of the level that shouldn't be accessible, or be more likely to encounter a collision bug and get stuck. Unfortunately, all this means that it's not enough to just test what we have changed, but we have to retest the whole level, or maybe the whole game itself. For every single change, no matter how small. That not only takes a ton of time and effort, but is often flat out impractical. So, what is the solution? Apparently, a proper solution would require asking tons of curious humans to test the game. But wait a second, we already have curious learning-based algorithms. Can we use them for playtesting? That sounds amazing. Well, yes, until we try it. You see, here is an automated agent, but a naive one, trying to explore the level. Unfortunately, it seems to have missed half the map. Well, that's not the rigorous testing we are looking for, is it? Let's see what this new AI offers. Can it do any better? Oh my, now we're talking. The new technique was able to explore not just 50%, but a whopping 95% of the map. Excellent. But we are experienced fellow scholars over here. So of course, we have some questions. Apparently, this one has great coverage, so it can cruise around, great. But our question is, can these AI agents really find game-breaking issues? Well, look. We just found a bug where it could climb to the top of the platform without having to use the elevator. It can also build a graph that describes which parts of the level are accessible and through what path. Look, this visualization tells us about the earlier issue where one could climb the wall through an unintended exploit, and what happens after the level designer supposedly fixed it by adjusting the steepness of the wall. Let's see the new path. Yes. Now, it could only get up there by using the elevator. That is the intended way to traverse the level. Excellent. And it gets better. It can even tell us the trajectories that enabled it to leave the map, so we know exactly what issues we need to fix without having to look through hours and hours of video footage. But whenever we apply the fixes, we can easily unleash another bunch of these AIs to search every nook and cranny and try these crazy strategies, even ones that don't make any sense but appear to work well. So, how long does this take? Well, the new method can explore half the map in approximately an hour or two, can explore 90% of the map in about 28 hours, and if we give it a couple more days, it goes up to about 95%. That is quite a bit, so we don't get immediate feedback as soon as we change something, since this method is geared towards curiosity and not efficiency. 
Note that this is just the first crack at the problem, and I would not be surprised if, just one more paper down the line, this would take about an hour, and two more papers down the line, it might even be done in a matter of minutes. What a time to be alive! This episode has been supported by Weights & Biases. In this post, they show you how to use sweeps to automate hyperparameter optimization and explore the space of possible models and find the best one. During my PhD studies, I trained a ton of neural networks which were used in our experiments. However, over time, there was just too much data in our repositories, and what I am looking for is not data, but insight. And that's exactly how Weights & Biases helps you, by organizing your experiments. It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub, and more. And get this, Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
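As a hedged sketch of the curiosity-driven coverage testing described above, here is a tiny grid-world agent with a count-based exploration bonus, reporting the same kind of map-coverage percentage quoted in the video. The grid size, step budget, and bonus formula are made-up stand-ins, not the reinforcement learning agent from the paper.

import random

def explore_with_curiosity(width=20, height=20, steps=5000, seed=0):
    """Toy stand-in for curiosity-driven playtesting: the agent prefers cells
    it has rarely visited, then we report how much of the map it covered."""
    rng = random.Random(seed)
    visits = {}
    x, y = 0, 0
    for _ in range(steps):
        # Score each neighbor with a count-based curiosity bonus 1 / (1 + visits).
        moves = [(x + dx, y + dy) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
                 if 0 <= x + dx < width and 0 <= y + dy < height]
        scored = [(1.0 / (1 + visits.get(m, 0)), rng.random(), m) for m in moves]
        _, _, (x, y) = max(scored)  # greedy w.r.t. curiosity, random tie-break
        visits[(x, y)] = visits.get((x, y), 0) + 1
    coverage = len(visits) / (width * height)
    return coverage

print(f"Map coverage: {explore_with_curiosity():.0%}")

The point of the sketch is only the metric: a naive random walker covers far less of the map with the same step budget, which mirrors the 50% versus 95% comparison in the narration.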
[{"start": 0.0, "end": 5.2, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Carlos Jona Ifehir."}, {"start": 5.2, "end": 7.84, "text": " Have you ever got stuck in a video game?"}, {"start": 7.84, "end": 11.36, "text": " Or found a glitch that would prevent you from finishing it?"}, {"start": 11.36, "end": 17.6, "text": " As many of you know, most well-known computer games undergo a ton of playtesting, an important"}, {"start": 17.6, "end": 20.64, "text": " step that is supposed to unveil these issues."}, {"start": 20.64, "end": 27.8, "text": " So, how is it possible that all these bugs and glitches still make it to the final product?"}, {"start": 27.8, "end": 30.880000000000003, "text": " Why did the creators not find these issues?"}, {"start": 30.880000000000003, "end": 35.6, "text": " Well, you see, playtesting is often done by humans."}, {"start": 35.6, "end": 41.44, "text": " That sounds like a good thing, and it often is, but here comes the problem."}, {"start": 41.44, "end": 47.08, "text": " Whenever we change something in the game, our changes may also have unintended consequences"}, {"start": 47.08, "end": 50.72, "text": " somewhere else away from where we applied them."}, {"start": 50.72, "end": 56.16, "text": " New oversights may appear elsewhere, for instance, moving a platform may make the level"}, {"start": 56.16, "end": 61.36, "text": " more playable, however, also, and this might happen."}, {"start": 61.36, "end": 67.24, "text": " The player may now be able to enter a part of the level that shouldn't be accessible,"}, {"start": 67.24, "end": 72.47999999999999, "text": " or be more likely to encounter a collision bug and get stuck."}, {"start": 72.47999999999999, "end": 77.88, "text": " Unfortunately, all this means that it's not enough to just test what we have changed,"}, {"start": 77.88, "end": 84.28, "text": " but we have to retest the whole level, or maybe the whole game itself."}, {"start": 84.28, "end": 88.2, "text": " For every single change, no matter how small."}, {"start": 88.2, "end": 93.84, "text": " That not only takes a ton of time and effort, but is often flat out impractical."}, {"start": 93.84, "end": 97.16, "text": " So, what is the solution?"}, {"start": 97.16, "end": 104.08, "text": " Apparently, a proper solution would require asking tons of curious humans to test the game."}, {"start": 104.08, "end": 109.72, "text": " But wait a second, we already have curious learning-based algorithms."}, {"start": 109.72, "end": 112.36, "text": " Can we use them for playtesting?"}, {"start": 112.36, "end": 114.64, "text": " That sounds amazing."}, {"start": 114.64, "end": 117.16, "text": " Well, yes, until we try it."}, {"start": 117.16, "end": 124.08, "text": " You see, here is an automated agent, but a naive one trying to explore the level."}, {"start": 124.08, "end": 127.68, "text": " Unfortunately, it seems to have missed half the map."}, {"start": 127.68, "end": 131.68, "text": " Well, that's not the rigorous testing we are looking for, is it?"}, {"start": 131.68, "end": 134.28, "text": " Let's see what this new AI offers."}, {"start": 134.28, "end": 136.72, "text": " Can it do any better?"}, {"start": 136.72, "end": 140.68, "text": " Oh, my, now we're talking."}, {"start": 140.68, "end": 149.0, "text": " The new technique was able to explore not less than 50%, but a whopping 95% of the map."}, {"start": 149.0, "end": 150.32, "text": " Excellent."}, {"start": 150.32, "end": 153.64000000000001, "text": " But we are experienced fellow scholars 
over here."}, {"start": 153.64000000000001, "end": 156.56, "text": " So of course, we have some questions."}, {"start": 156.56, "end": 162.16, "text": " Apparently, this one has great coverage, so it can cruise around, great."}, {"start": 162.16, "end": 167.52, "text": " But our question is, can these AI agents really find game-breaking issues?"}, {"start": 167.52, "end": 169.24, "text": " Well, look."}, {"start": 169.24, "end": 174.08, "text": " We just found a bug where it could climb to the top of the platform without having to"}, {"start": 174.08, "end": 177.48000000000002, "text": " use the elevator."}, {"start": 177.48000000000002, "end": 183.68, "text": " It can also build a graph that describes which parts of the level are accessible and through"}, {"start": 183.68, "end": 185.24, "text": " what path."}, {"start": 185.24, "end": 191.60000000000002, "text": " Look, this visualization tells us about the earlier issue where one could climb the wall"}, {"start": 191.60000000000002, "end": 198.04000000000002, "text": " through an unintentional issue and after the level designer supposedly fixed it by adjusting"}, {"start": 198.04, "end": 200.44, "text": " the steepness of the wall."}, {"start": 200.44, "end": 204.32, "text": " Let's see the new path."}, {"start": 204.32, "end": 205.32, "text": " Yes."}, {"start": 205.32, "end": 209.35999999999999, "text": " Now, it could only get up there by using the elevator."}, {"start": 209.35999999999999, "end": 212.51999999999998, "text": " That is the intended way to traverse the level."}, {"start": 212.51999999999998, "end": 214.23999999999998, "text": " Excellent."}, {"start": 214.23999999999998, "end": 215.72, "text": " And it gets better."}, {"start": 215.72, "end": 221.35999999999999, "text": " It can even tell us the trajectories that enabled it to leave the map so we know exactly"}, {"start": 221.35999999999999, "end": 227.28, "text": " what issues we need to fix without having to look through hours and hours of video footage."}, {"start": 227.28, "end": 232.56, "text": " But whenever we apply the fixes, we can easily unleash another bunch of these AI's to"}, {"start": 232.56, "end": 238.84, "text": " search every new cancanny and try these crazy strategies even once that don't make any"}, {"start": 238.84, "end": 242.2, "text": " sense but appear to work well."}, {"start": 242.2, "end": 245.24, "text": " So, how long does this take?"}, {"start": 245.24, "end": 252.24, "text": " Well, the new method can explore half the map in approximately an hour or two, can explore"}, {"start": 252.24, "end": 260.0, "text": " 90% of the map in about 28 hours and if we give it a couple more days, it goes up to about"}, {"start": 260.0, "end": 262.04, "text": " 95%."}, {"start": 262.04, "end": 267.0, "text": " That is quite a bit so we don't get immediate feedback as soon as we change something"}, {"start": 267.0, "end": 271.84000000000003, "text": " since this method is geared towards curiosity and not efficiency."}, {"start": 271.84000000000003, "end": 276.72, "text": " Note that this is just the first crack at the problem and I would not be surprised if"}, {"start": 276.72, "end": 282.12, "text": " just one more paper down the line, this would take about an hour and two more papers"}, {"start": 282.12, "end": 286.88, "text": " down the line, it might even be done in a matter of minutes."}, {"start": 286.88, "end": 288.72, "text": " What a time to be alive!"}, {"start": 288.72, "end": 291.88, "text": " This episode has been supported by weights and 
biases."}, {"start": 291.88, "end": 297.72, "text": " In this post, they show you how to use sweeps to automate hyper-parameter optimization"}, {"start": 297.72, "end": 302.56, "text": " and explore the space of possible models and find the best one."}, {"start": 302.56, "end": 307.68, "text": " During my PhD studies, I trained a ton of neural networks which were used in our experiments."}, {"start": 307.68, "end": 312.92, "text": " However, over time, there was just too much data in our repositories and what I am looking"}, {"start": 312.92, "end": 316.36, "text": " for is not data, but insight."}, {"start": 316.36, "end": 321.12, "text": " And that's exactly how weights and biases helps you by organizing your experiments."}, {"start": 321.12, "end": 326.76, "text": " It is used by more than 200 companies and research institutions, including OpenAI, Toyota"}, {"start": 326.76, "end": 329.48, "text": " Research, GitHub and more."}, {"start": 329.48, "end": 335.32, "text": " And get this, weights and biases is free for all individuals, academics and open source"}, {"start": 335.32, "end": 336.32, "text": " projects."}, {"start": 336.32, "end": 342.04, "text": " Make sure to visit them through wnba.com slash papers or just click the link in the video"}, {"start": 342.04, "end": 345.32, "text": " description and you can get a free demo today."}, {"start": 345.32, "end": 349.92, "text": " Our thanks to weights and biases for their long-standing support and for helping us make"}, {"start": 349.92, "end": 351.36, "text": " better videos for you."}, {"start": 351.36, "end": 380.92, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=jl0XCslxwB0
NVIDIA’s Minecraft AI: Feels Like Magic! 🌴 …Also, 1 Million Subs! 🥳
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Unsupervised 3D Neural Rendering of Minecraft Worlds" is available here: https://nvlabs.github.io/GANcraft/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #minecraft #gancraft
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. We just hit a million subscribers. I can hardly believe that so many of you fellow scholars are enjoying the papers. Thank you so much for all the love. In the previous episode, we explored an absolutely insane idea. The idea was to unleash a learning algorithm on a dataset that contains images and videos of cities. Then take a piece of video footage from a game and translate it into a real movie. It is an absolute miracle that it works, and it not only works, but it works reliably and interactively. It also works much better than its predecessors. Now, we discussed that the input video game footage is pretty detailed. And I was wondering, what if we don't create the entire game in such detail? What about creating just the bare minimum, a draft of the game, if you will, and let the algorithm do the heavy lifting? Let's call this world-to-world translation. So is world-to-world translation possible, or is this science fiction? Fortunately, scientists at Nvidia and Cornell University thought of that problem and came up with a remarkable solution. But the first question is, what form should this draft take? And they say it should be a Minecraft world, or in other words, a landscape assembled from little blocks. Yes, that is simple enough indeed. So this goes in, and now let's see what comes out. Oh my, it created water, it understands the concept of an island, and it created a beautiful landscape, also with vegetation. Insanity. It even seems to have some concept of reflections, although they will need some extra work to get perfectly right. But what about artistic control? Do we get this one solution, or can we give more detailed instructions to the technique? Yes, we can. Look at that. Since the training data contains desert and snowy landscapes too, it also supports them as outputs. Whoa, this is getting wild. I like it. And it even supports interpolation, which means that we can create one landscape and ask the AI to create a blend between different styles. We just look at the output animations and pick the one that we like best. Absolutely amazing. What I also really liked is that it also supports rendering fog. But this is not some trivial fog technique. No, no, look at how beautifully it occludes the trees. If we look under the hood… oh my, I am a light transport researcher by trade, and boy, am I happy to see the authors having done their homework. Look, we are not playing games here. The technique contains bona fide volumetric light transmission calculations. Now, this is not the first technique to perform this kind of world-to-world translation. What about the competition? As you see, there are many prior techniques here, but there is one key issue that almost all of them share. So, what is that? Oh yes, much like with the other video game papers, the issue is the lack of temporal coherence, which means that the previous techniques don't remember what they did a few images earlier and may create a drastically different series of images. And the result is this kind of flickering that is often a deal breaker, regardless of how good the technique is otherwise. Look, the new method does this significantly better. This could help level generation for computer games, creating all kinds of simulations, and if it improves some more, these could maybe even become backdrops to be used in animated movies. Now, of course, this is still not perfect, some of the outputs are still blocky. 
But, with this method, creating virtual worlds has never been easier. I cannot believe that we can have a learning-based algorithm where the input is one draft world and it transforms it into a much more detailed and beautiful one. Yes, it has its limitations, but just imagine what we will be able to do two more papers down the line. Especially given that the quality of the results can be algorithmically measured, which is a godsend for comparing this to future methods. And for now, huge congratulations to Nvidia and Cornell University for this amazing paper. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
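For the fog discussion above, here is a minimal sketch of the kind of volumetric transmittance term such a renderer evaluates along a camera ray (Beer-Lambert attenuation), which is why distant trees fade into the fog. The extinction coefficient and colors are assumed values; this is an illustration of the general principle, not the paper's actual rendering code.

import math

def fog_transmittance(distance_m: float, sigma_t: float = 0.05) -> float:
    """Beer-Lambert transmittance through homogeneous fog: T = exp(-sigma_t * d).
    The fraction (1 - T) of the light is absorbed or scattered out of the ray."""
    return math.exp(-sigma_t * distance_m)

def fog_color_blend(surface_color, fog_color, distance_m, sigma_t=0.05):
    """Blend a surface color toward the fog color based on transmittance."""
    t = fog_transmittance(distance_m, sigma_t)
    return tuple(t * c + (1.0 - t) * f for c, f in zip(surface_color, fog_color))

tree_green, fog_grey = (0.1, 0.5, 0.1), (0.7, 0.7, 0.75)
for d in (5, 25, 100):
    print(d, [round(v, 3) for v in fog_color_blend(tree_green, fog_grey, d)])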
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.8, "end": 7.8, "text": " We just hit a million subscribers."}, {"start": 7.8, "end": 12.8, "text": " I can hardly believe that so many of you fellow scholars are enjoying the papers."}, {"start": 12.8, "end": 15.6, "text": " Thank you so much for all the love."}, {"start": 15.6, "end": 20.2, "text": " In the previous episode, we explored an absolutely insane idea."}, {"start": 20.2, "end": 27.8, "text": " The idea was to unleash a learning algorithm on a dataset that contains images and videos of cities."}, {"start": 27.8, "end": 34.6, "text": " Then take a piece of video footage from a game and translate it into a real movie."}, {"start": 34.6, "end": 44.0, "text": " It is an absolute miracle that it works, and it not only works, but it works reliably and interactively."}, {"start": 44.0, "end": 47.6, "text": " It also works much better than its predecessors."}, {"start": 47.6, "end": 52.2, "text": " Now we discussed that the input video game footage is pretty detailed."}, {"start": 52.2, "end": 58.400000000000006, "text": " And I was wondering, what if we don't create the entire game in such detail?"}, {"start": 58.400000000000006, "end": 63.400000000000006, "text": " What about creating just the bare minimum, a draft of the game, if you will,"}, {"start": 63.400000000000006, "end": 66.60000000000001, "text": " and let the algorithm do the heavy lifting?"}, {"start": 66.60000000000001, "end": 70.0, "text": " Let's call this World to World Translation."}, {"start": 70.0, "end": 75.80000000000001, "text": " So is World to World Translation possible or is this science fiction?"}, {"start": 75.80000000000001, "end": 81.0, "text": " Fortunately, scientists at Nvidia and Cornell University thought of that problem"}, {"start": 81.0, "end": 83.8, "text": " and came up with a remarkable solution."}, {"start": 83.8, "end": 88.4, "text": " But the first question is, what form should this draft take?"}, {"start": 88.4, "end": 96.2, "text": " And they say it should be a Minecraft world or in other words, a landscape assembled from little blocks."}, {"start": 96.2, "end": 98.8, "text": " Yes, there is simple enough indeed."}, {"start": 98.8, "end": 104.4, "text": " So this goes in and now let's see what comes out."}, {"start": 104.4, "end": 116.4, "text": " Oh my, it created water, it understands the concept of an island, and it created a beautiful landscape also with vegetation."}, {"start": 116.4, "end": 118.4, "text": " Insanity."}, {"start": 118.4, "end": 126.4, "text": " It even seems to have some concept of reflections, although they will need some extra work to get perfectly right."}, {"start": 126.4, "end": 129.4, "text": " But what about artistic control?"}, {"start": 129.4, "end": 135.4, "text": " Do we get this one solution or can we give more detailed instructions to the technique?"}, {"start": 135.4, "end": 137.4, "text": " Yes, we can."}, {"start": 137.4, "end": 138.4, "text": " Look at that."}, {"start": 138.4, "end": 145.4, "text": " Since the training data contains desert and snowy landscapes too, it also supports them as outputs."}, {"start": 145.4, "end": 148.4, "text": " Whoa, this is getting wild."}, {"start": 148.4, "end": 150.4, "text": " I like it."}, {"start": 150.4, "end": 155.4, "text": " And it even supports interpolation, which means that we can create one landscape"}, {"start": 155.4, "end": 160.4, "text": " and ask the AI to create 
a blend between different styles."}, {"start": 160.4, "end": 165.4, "text": " We just look at the output animations and pick the one that we like best."}, {"start": 165.4, "end": 167.4, "text": " Absolutely amazing."}, {"start": 167.4, "end": 171.4, "text": " What I also really liked is that it also supports rendering fog."}, {"start": 171.4, "end": 175.4, "text": " But this is not some trivial fog technique."}, {"start": 175.4, "end": 179.4, "text": " No, no, look at how beautifully it occludes the trees."}, {"start": 179.4, "end": 181.4, "text": " If we look under the hood."}, {"start": 181.4, "end": 190.4, "text": " Oh my, I am a light transport researcher by trade and boy, am I happy to see the authors having done their homework."}, {"start": 190.4, "end": 192.4, "text": " Look, we are not playing games here."}, {"start": 192.4, "end": 197.4, "text": " The technique contains bona fide volumetric light transmission calculations."}, {"start": 197.4, "end": 202.4, "text": " Now, this is not the first technique to perform this kind of word-to-word translation."}, {"start": 202.4, "end": 204.4, "text": " What about the competition?"}, {"start": 204.4, "end": 212.4, "text": " As you see, there are many prior techniques here, but there is one key issue that almost all of them share."}, {"start": 212.4, "end": 214.4, "text": " So, what is that?"}, {"start": 214.4, "end": 220.4, "text": " Oh yes, much like with the other video game papers, the issue is the lack of temporal coherence,"}, {"start": 220.4, "end": 226.4, "text": " which means that the previous techniques don't remember what they did a few images earlier"}, {"start": 226.4, "end": 230.4, "text": " and may create a drastically different series of images."}, {"start": 230.4, "end": 238.4, "text": " And the result is this kind of flickering that is often a deal breaker, regardless of how good the technique is otherwise."}, {"start": 238.4, "end": 242.4, "text": " Look, the new method does this significantly better."}, {"start": 242.4, "end": 248.4, "text": " This could help level generation for computer games, creating all kinds of simulations,"}, {"start": 248.4, "end": 256.4, "text": " and if it improves some more, these could maybe even become back drops to be used in animated movies."}, {"start": 256.4, "end": 261.4, "text": " Now, of course, this is still not perfect, some of the outputs are still blocky."}, {"start": 261.4, "end": 266.4, "text": " But, with this method, creating virtual worlds has never been easier."}, {"start": 266.4, "end": 273.4, "text": " I cannot believe that we can have a learning-based algorithm where the input is one draft world"}, {"start": 273.4, "end": 278.4, "text": " and it transforms it to a much more detailed and beautiful one."}, {"start": 278.4, "end": 285.4, "text": " Yes, it has its limitations, but just imagine what we will be able to do two more papers down the line."}, {"start": 285.4, "end": 290.4, "text": " Especially given that the quality of the results can be algorithmically measured,"}, {"start": 290.4, "end": 294.4, "text": " which is a gutsynt for comparing this to future methods."}, {"start": 294.4, "end": 300.4, "text": " And for now, huge congratulations to Nvidia and Cornell University for this amazing paper."}, {"start": 300.4, "end": 304.4, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 304.4, "end": 310.4, "text": " If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 310.4, "end": 317.4, "text": " They 
recently launched Quadro RTX 6000, RTX 8000, and V100 instances."}, {"start": 317.4, "end": 325.4, "text": " And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Asia."}, {"start": 325.4, "end": 330.4, "text": " Plus, they are the only Cloud service with 48GB RTX 8000."}, {"start": 330.4, "end": 338.4, "text": " Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations or servers."}, {"start": 338.4, "end": 345.4, "text": " Make sure to go to lambdaleps.com slash papers to sign up for one of their amazing GPU instances today."}, {"start": 345.4, "end": 350.4, "text": " Our thanks to Lambda for their long-standing support and for helping us make better videos for you."}, {"start": 350.4, "end": 378.4, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Lp4k4O_HEeQ
One Simulation Paper, Tons of Progress! 💇
❤️ Check out Perceptilabs and sign up for a free demo here: https://www.perceptilabs.com/papers 📝 The paper "Revisiting Integration in the Material Point Method" is available here: http://yunfei.work/asflip/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is two-minute papers with Dr. Károly Zsolnai-Fehér. Through the power of computer graphics research, today we can write wondrous programs which can simulate all kinds of interactions in virtual worlds. These works are some of my favorites, and looking at the results, one would think that these algorithms are so advanced that there is hardly anything new left to invent in this area. But as amazing as these techniques are, they don't come without limitations. Alright, well, what do those limitations look like? Let's have a look at this example. The water is coming out of the nozzle, and it behaves unnaturally. But that's only the smaller problem, there is an even bigger one. What is that problem? Let's slow this down, and look carefully. Oh, where did the water go? Yes, this is the classical numerical dissipation problem in fluid simulations, where, due to an averaging step, particles disappear into thin air. And now, hold on to your papers and let's see if this new method can properly address this problem. And, oh yes, fantastic. So it dissipates less. Great, what else does this do? Let's have a look through this experiment, where a wheel gets submerged into sand. That's good, but the simulation is a little mellow. You see, the sand particles are flowing down like a fluid, and the wheel does not really roll up the particles into the air. And the new one? So apparently, it not only helps with numerical dissipation, but also with particle separation too. More value. I like it. If this technique can really solve these two phenomena, we don't even need sandy tires and water sprinklers to make it shine. There are so many scenarios where it performs better than previous techniques. For instance, when simulating this frictionless elastic plate with previous methods, some of the particles get glued to it. And did you catch the other issue? Yes, the rest of the particles also refuse to slide off of each other. And now, let's see the new method. Oh my, it can simulate this phenomenon correctly too. And it does not stop there, it also simulates strand-strand interactions better than previous methods. In these cases, sometimes the collision of short strands with boundaries was also simulated incorrectly. Look at how all this geometry intersected through the brush. And the new method? Yes, of course, it addresses these issues too. So if it can simulate the movement and intersection of short strands better, does that mean that it can also perform higher quality hair simulations? Oh yes, yes it does. Excellent. So as you see, the pace of progress in computer graphics research is absolutely stunning. Things that were previously impossible can become possible in a matter of just one paper. What a time to be alive. Perceptilabs is a visual API for TensorFlow, carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers to easily install the free local version of their system today. Our thanks to Perceptilabs for their support and for helping us make better videos for you. 
Thanks for watching and for your generous support and I'll see you next time.
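To illustrate the averaging step that the narration above blames for numerical dissipation, here is a toy one-dimensional particle-grid-particle velocity transfer in Python, contrasting a pure PIC update with a FLIP-style update. It is only a sketch of the general idea behind such transfers, not the integrator proposed in the paper.

import numpy as np

def one_transfer_step(particle_v, blend=1.0):
    """One particle -> grid -> particle transfer with all particles mapped to a
    single grid node for simplicity. blend=1.0 is pure PIC (copy the averaged
    grid velocity back), blend=0.0 is pure FLIP (add only the grid-side change
    back). No forces act on the grid in this toy, so any energy loss comes
    purely from the averaging step."""
    grid_v = particle_v.mean()                 # particle -> grid (the averaging step)
    grid_delta = grid_v - grid_v               # grid update would go here; zero in this toy
    pic = np.full_like(particle_v, grid_v)     # grid -> particle, PIC style
    flip = particle_v + grid_delta             # grid -> particle, FLIP style
    return blend * pic + (1.0 - blend) * flip

v = np.array([1.0, -1.0, 2.0, -2.0])           # lively, opposing particle velocities
print("kinetic energy before:", 0.5 * (v ** 2).sum())
print("after one PIC step:   ", 0.5 * (one_transfer_step(v, blend=1.0) ** 2).sum())
print("after one FLIP step:  ", 0.5 * (one_transfer_step(v, blend=0.0) ** 2).sum())

With these numbers the PIC transfer wipes out all of the relative motion in a single step, while the FLIP-style transfer preserves it, which is the trade-off the narration is hinting at.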
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Jone Fahir,"}, {"start": 4.64, "end": 10.32, "text": " through the power of computer graphics research, today we can write wondrous programs which can"}, {"start": 10.32, "end": 16.8, "text": " simulate all kinds of interactions in virtual worlds. These works are some of my favorites,"}, {"start": 16.8, "end": 22.88, "text": " and looking at the results one would think that these algorithms are so advanced that it's hardly"}, {"start": 22.88, "end": 29.44, "text": " anything new to invent in this area. But as amazing as these techniques are, they don't come"}, {"start": 29.44, "end": 36.160000000000004, "text": " without limitations. Alright, well, what do those limitations look like? Let's have a look at"}, {"start": 36.160000000000004, "end": 44.56, "text": " this example. The water is coming out of the nozzle, and it behaves unnaturally. But that's only"}, {"start": 44.56, "end": 51.6, "text": " the smaller problem, there is an even bigger one. What is that problem? Let's slow this down,"}, {"start": 51.6, "end": 61.6, "text": " and look carefully. Oh, where did the water go? Yes, this is the classical numerical dissipation"}, {"start": 61.6, "end": 68.24000000000001, "text": " problem in fluid simulations, where due to an averaging step particles disappear into thin air."}, {"start": 68.96000000000001, "end": 75.52000000000001, "text": " And now, hold on to your papers and let's see if this new method can properly address this problem."}, {"start": 75.52, "end": 87.28, "text": " And, oh yes, fantastic. So it dissipates less. Great, what else does this do? Let's have a look"}, {"start": 87.28, "end": 94.72, "text": " through this experiment, where a wheel gets submerged into sand, that's good, but the simulation is a"}, {"start": 94.72, "end": 102.24, "text": " little mellow. You see, the sand particles are flowing down like a fluid, and the wheel does not"}, {"start": 102.24, "end": 113.6, "text": " really roll up the particles in the air. And the new one. So apparently, it not only helps with"}, {"start": 113.6, "end": 121.52, "text": " numerical dissipation, but also with particle separation too. More value. I like it. If this"}, {"start": 121.52, "end": 127.52, "text": " technique can really solve these two phenomena, we don't even need sandy tires and water sprinklers"}, {"start": 127.52, "end": 133.44, "text": " to make it shine. There are so many scenarios where it performs better than previous techniques."}, {"start": 134.16, "end": 139.6, "text": " For instance, when simulating this non-frictional elastic plate with previous methods,"}, {"start": 139.6, "end": 147.92, "text": " some of the particles get glued to it. And did you catch the other issue? Yes, the rest of the"}, {"start": 147.92, "end": 154.32, "text": " particles also refuse to slide off of each other. And now, let's see the new method."}, {"start": 154.32, "end": 164.48, "text": " Oh my, it can simulate this phenomena correctly too. And it does not stop there, it also simulates"}, {"start": 164.48, "end": 171.35999999999999, "text": " strand strand interactions better than previous methods. In these cases, sometimes the collision"}, {"start": 171.35999999999999, "end": 179.12, "text": " of short strands with boundaries was also simulated incorrectly. Look at how all this geometry intersected"}, {"start": 179.12, "end": 187.04, "text": " through the brush. And the new method? 
Yes, of course, it addresses these issues too."}, {"start": 188.48000000000002, "end": 194.32, "text": " So if it can simulate the movement and intersection of short strands better,"}, {"start": 194.32, "end": 202.0, "text": " does that mean that it can also perform higher quality hair simulations? Oh yes, yes it does."}, {"start": 202.0, "end": 209.68, "text": " Excellent. So as you see, the pace of progress in computer graphics research is absolutely stunning."}, {"start": 210.4, "end": 216.4, "text": " Things that were previously impossible can become possible in a matter of just one paper."}, {"start": 217.2, "end": 222.16, "text": " What a time to be alive. Perceptileps is a visual API for TensorFlow,"}, {"start": 222.16, "end": 228.24, "text": " carefully designed to make machine learning as intuitive as possible. This gives you a faster way"}, {"start": 228.24, "end": 234.24, "text": " to build out models with more transparency into how your model is architected, how it performs,"}, {"start": 234.24, "end": 240.56, "text": " and how to debug it. Look, it lets you toggle between the visual modeler and the code editor."}, {"start": 240.56, "end": 245.84, "text": " It even generates visualizations for all the model variables and gives you recommendations"}, {"start": 245.84, "end": 252.72, "text": " both during modeling and training and does all this automatically. I only wish I had a tool like this"}, {"start": 252.72, "end": 259.2, "text": " when I was working on my neural networks during my PhD years. Visit perceptileps.com slash papers"}, {"start": 259.2, "end": 264.72, "text": " to easily install the free local version of their system today. Our thanks to perceptileps for"}, {"start": 264.72, "end": 269.52, "text": " their support and for helping us make better videos for you. Thanks for watching and for your"}, {"start": 269.52, "end": 286.32, "text": " generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=7WgtK1C4hQg
Simulating The Olympics… On Mars! 🌗
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Discovering Diverse Athletic Jumping Strategies" is available here: https://arpspoof.github.io/project/jump/jump.html 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, it is possible to teach virtual characters to perform highly dynamic motions like a cartwheel or backflips. And not only that, but we can teach an AI to perform this differently from other characters, to do it with style, if you will. But today we are not looking to be stylish, today we are looking to be efficient. In this paper, researchers placed an AI in a physics simulation, asked it to control a virtual character, and gave it one task: to jump as high as it can. And when I heard this idea, I was elated and immediately wondered, did it come up with popular techniques that exist in the real world? Well, let's see... Yes! Oh, that is indeed a Fosbury flop. This allows the athlete to jump backwards over the bar, thus lowering their center of gravity. Even today, this is the prevalent technique in high jump competitions. With this technique, the take-off takes place relatively late. The only problem is that the AI hasn't cleared the bar so far. So, can it? Well, this is a learning-based algorithm, so with a little more practice it should improve... Yes! Great work! If we lower the bar just a tiny bit for this virtual athlete, we can also observe it performing the Western roll. With this technique, we take off a little earlier, and we don't jump backward, but sideways. If it had nothing else, this would already be a great paper, but we are not nearly done yet. The best is yet to come. This is a simulation, a virtual world, if you will, and here we make all the rules. So, the limit is only our imagination. The authors know that very well, and you will see that they indeed have a very vivid imagination. For instance, we can also simulate a jump with a weak take-off leg and see that, with this condition, the little AI can only clear a bar that is approximately one foot lower than its previous record. What about another virtual athlete with an inflexible spine? It can jump approximately two feet lower. Here's the difference compared to the original. I am enjoying this a great deal, and it's only getting better. Next, what happens if we are injured and have a cast on the take-off knee? What results can we expect? Something like this. We can jump a little more than two feet lower. What about organizing the Olympics on Mars? What would that look like? What would the world record be with the weaker gravity there? Well, hold on to your papers and look. Yes, we could jump three feet higher than on Earth, and then... ouch. Well, it missed the foam mat, but otherwise very impressive. And if we are already there, why limit the simulation to high jumps? Why not try something else? Again, in a simulation, we can do anything. Previously, the task was to jump over the bar, but we can also reconfigure the simulation to include, instead, jumping through obstacles. To get all of these magical results, the authors propose a step they call Bayesian Diversity Search. This helps systematically create a rich selection of novel strategies, and it does this efficiently. The authors also went the extra mile and included a comparison to motion capture footage performed by a real athlete. But note that the AI's version uses a similar technique and is able to clear a significantly higher bar without ever seeing a high jump move. The method was trained on motion capture footage to get used to human-like movements like walking, running, and kicks, but it has never seen any high jump techniques before. Wow! So, if this can invent high jumping techniques that took decades for humans to invent, I wonder what else it could invent? What do you think? Let me know in the comments below. What a time to be alive! This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
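As a quick sanity check of the Mars claim in the transcript above, here is a back-of-the-envelope Python calculation that treats the jumper's center of mass as a simple projectile. The take-off speed is a rough assumption, and because the simulated character also has to control its whole body and clear a bar, the roughly three-foot gain reported in the video is smaller than this idealized upper bound.

```python
# Back-of-the-envelope check (not from the paper) of the Mars jump claim.
# Treating the jumper's center of mass as a projectile, the rise height after
# take-off is h = v^2 / (2 g), so weaker gravity scales the jump by g_E / g_M.
# The take-off speed below is a rough assumption for an elite high jumper.
G_EARTH = 9.81   # m/s^2
G_MARS = 3.71    # m/s^2
FT_PER_M = 3.28

takeoff_speed = 4.5  # m/s, assumed vertical take-off speed

def rise_height(v, g):
    """Height gained by the center of mass after leaving the ground at speed v."""
    return v * v / (2.0 * g)

h_earth = rise_height(takeoff_speed, G_EARTH)
h_mars = rise_height(takeoff_speed, G_MARS)
print(f"Earth: {h_earth:.2f} m   Mars: {h_mars:.2f} m   "
      f"gain: {(h_mars - h_earth):.2f} m ({(h_mars - h_earth) * FT_PER_M:.1f} ft)")
```

Even so, the direction and rough magnitude of the effect, a rise height scaled by the ratio of the two gravities, matches the intuition in the video.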
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.8, "end": 13.6, "text": " Today, it is possible to teach virtual characters to perform highly dynamic motions like a cartwheel or backflips."}, {"start": 13.6, "end": 22.8, "text": " And not only that, but we can teach an AI to perform this differently from other characters to do it with style, if you will."}, {"start": 22.8, "end": 28.8, "text": " But, today we are not looking to be stylish, today we are looking to be efficient."}, {"start": 28.8, "end": 40.8, "text": " In this paper, researchers placed an AI in a physics simulation and asked it to control a virtual character and gave it one task to jump as high as it can."}, {"start": 40.8, "end": 50.8, "text": " And when I heard this idea, I was elated and immediately wondered, did it come up with popular techniques that exist in the real world?"}, {"start": 50.8, "end": 52.8, "text": " Well, let's see..."}, {"start": 52.8, "end": 54.8, "text": " Yes!"}, {"start": 54.8, "end": 64.8, "text": " Oh, that is indeed a force-bury flop. This allows the athlete to jump backwards over the bar, thus lowering their center of gravity."}, {"start": 64.8, "end": 68.8, "text": " Even today, this is the prevalent technique in high-jump competitions."}, {"start": 68.8, "end": 73.8, "text": " With this technique, the take-off takes place relatively late."}, {"start": 73.8, "end": 79.8, "text": " The only problem is that the AI didn't clear the bar so far, so can it."}, {"start": 79.8, "end": 87.8, "text": " Well, this is a learning-based algorithm, so with a little more practice it should improve..."}, {"start": 87.8, "end": 91.8, "text": " Yes! Great work!"}, {"start": 91.8, "end": 99.8, "text": " If we lowered the bar just a tiny bit for this virtual athlete, we can also observe it performing the Western role."}, {"start": 99.8, "end": 106.8, "text": " With this technique, we take off a little earlier and we don't jump backward, but sideways."}, {"start": 106.8, "end": 115.8, "text": " If it had nothing else, this would already be a great paper, but we are not nearly done yet. The best is yet to come."}, {"start": 115.8, "end": 121.8, "text": " This is a simulation, a virtual world, if you will, and here we make all the rules."}, {"start": 121.8, "end": 125.8, "text": " So, the limit is only our imagination."}, {"start": 125.8, "end": 131.8, "text": " The authors know that very well and you will see that they indeed have a very vivid imagination."}, {"start": 131.8, "end": 145.8, "text": " For instance, we can also simulate a jump with a weak take-off flag and see that with this condition, the little AI can only clear a bar that is approximately one foot lower than its previous record."}, {"start": 145.8, "end": 152.8, "text": " What about another virtual athlete with an inflexible spine?"}, {"start": 152.8, "end": 157.8, "text": " It can jump approximately two feet lower."}, {"start": 157.8, "end": 161.8, "text": " Here's the difference compared to the original."}, {"start": 161.8, "end": 166.8, "text": " I am enjoying this a great deal and it's only getting better."}, {"start": 166.8, "end": 174.8, "text": " Next, what happens if we are injured and have a cast on the take-off knee? What results can we expect?"}, {"start": 174.8, "end": 179.8, "text": " Something like this. 
We can jump a little more than two feet lower."}, {"start": 179.8, "end": 183.8, "text": " What about organizing the Olympics on Mars?"}, {"start": 183.8, "end": 188.8, "text": " What would that look like? What would the world record be with the weaker gravity there?"}, {"start": 188.8, "end": 193.8, "text": " Well, hold on to your papers and look."}, {"start": 193.8, "end": 199.8, "text": " Yes, we could jump three feet higher than on Earth and then..."}, {"start": 199.8, "end": 201.8, "text": "...out."}, {"start": 201.8, "end": 205.8, "text": " Well, miss the phone-maring, but otherwise very impressive."}, {"start": 205.8, "end": 210.8, "text": " And if we are already there, why limit the simulation to high jumps?"}, {"start": 210.8, "end": 216.8, "text": " Why not try something else? Again, in a simulation, we can do anything."}, {"start": 216.8, "end": 226.8, "text": " Previously, the task was to jump over the bar, but we can also recreate the simulation to include, instead, jumping through obstacles."}, {"start": 226.8, "end": 233.8, "text": " To get all of these magical results, the authors propose a step they call Bayesian Diversity Search."}, {"start": 233.8, "end": 240.8, "text": " This helps systematically creating a reach selection of novel strategies and it does this efficiently."}, {"start": 240.8, "end": 248.8, "text": " The authors also went the extra mile and included a comparison to motion capture footage performed by a real athlete."}, {"start": 248.8, "end": 259.8, "text": " But note that the AAS version uses a similar technique and is able to clear a significantly higher bar without ever seeing a hijab move."}, {"start": 259.8, "end": 270.8, "text": " The method was trained on motion capture footage to get used to human-like movements like walking, running, and kicks, but it has never seen any hijab techniques before."}, {"start": 270.8, "end": 272.8, "text": " Wow!"}, {"start": 272.8, "end": 280.8, "text": " So, if this can invent hijumping techniques that took decades for humans to invent, I wonder what else could it invent?"}, {"start": 280.8, "end": 283.8, "text": " What do you think? Let me know in the comments below."}, {"start": 283.8, "end": 285.8, "text": " What a time to be alive!"}, {"start": 285.8, "end": 295.8, "text": " This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 295.8, "end": 309.8, "text": " They recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Asia."}, {"start": 309.8, "end": 314.8, "text": " Plus, they are the only Cloud service with 48GB RTX 8000."}, {"start": 314.8, "end": 322.8, "text": " Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers."}, {"start": 322.8, "end": 329.8, "text": " Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today."}, {"start": 329.8, "end": 335.8, "text": " Our thanks to Lambda for their long-standing support and for helping us make better videos for you."}, {"start": 335.8, "end": 345.8, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=AGCH1GR7pPU
Burning Down an Entire Virtual Forest! 🌲🔥
❤️ Check out the Gradient Dissent podcast by Weights & Biases: http://wandb.me/gd  📝 The paper "Fire in Paradise: Mesoscale Simulation of Wildfires" is available here: http://computationalsciences.org/publications/haedrich-2021-wildfires.html 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #gamedev
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. In a previous episode, not so long ago, we burned down a virtual tree. This was possible through an amazing simulation paper from four years ago, where each leaf has its own individual mass and area. They burn individually, transfer heat to their surroundings, and finally, branches bend and, look, can eventually even break in this process. How quickly did this run? Of course, in real time. Well, that is quite a paper, so if this was so good, how does anyone improve on that? Burn down another virtual tree? No, no, that would be too easy. You know what? Instead, let's set on fire an entire virtual forest. Oh yeah! Here you see a simulation of a devastating fire from a lightning strike in Yosemite National Park. The simulations this time around are typically sped up a great deal to be able to give us a better view of how the fire spreads, so if you see some flickering, that is the reason for it. But wait, is that really that much harder? Why not just put a bunch of trees next to each other and start the simulation? Would that work? The answer is a resounding no. Let's have a look at why, and with that, hold onto your papers, because here comes the best part. It simulates not only the fire, but cloud dynamics as well. Here you see how the wildfire creates lots of hot and dark smoke closer to the ground, and wait for it. Yes, there we go. Higher up, the condensation of water creates this lighter, cloudy region. Yes, this is key to the simulation, not because of the aesthetic effects, but because this wildfire can indeed create a cloud type that goes by the name flammagenitus. So, is that good or bad news? Well, both. Let's start with the good news: it often produces rainfall, which helps put out the fire. Well, that is wonderful news, so then what is so bad about it? Well, flammagenitus clouds may also trigger a thunderstorm and thus create another huge fire. That's bad news number one. And bad news number two, it also occludes the fire, thereby making it harder to locate and extinguish. So, got it, add cloud dynamics to the tree fire simulator and we are done. Right? No, not even close. In a forest fire simulation, not just the clouds, everything matters. For instance, we first need to take into consideration the wind intensity and direction. This can mean the difference between a manageable and a devastating forest fire. Second, it takes into consideration the density and moisture content of different tree types. For instance, you see that the darker trees here are burning down really slowly. Why is that? That is because these are denser trees: birches and oaks. Third, the distribution of the trees also matters. Of course, the more area is covered by trees, the more degrees of freedom there are for the fire to spread. And fourth, fire can not only spread horizontally from tree to tree, but vertically too. Look, when a small tree catches fire, this can happen. So, as we established from the previous paper, one tree catching fire can be simulated in real time. So, what about an entire forest? Let's take the simulation with the highest number of trees. My goodness, they simulated 120k trees here. And the computation time for one simulation step was 95. So, 95 what? 95 milliseconds. Wow! So, this thing runs interactively, which means that all of these phenomena can be simulated in close to real time. With that, we can now model how a fire would spread in real forests around the world, test different kinds of fire barriers and their advantages, and we can even simulate how to effectively put out the fire. And don't forget, we went from simulating one burning tree to 120k in just one more paper down the line. What a time to be alive! This video has been supported by Weights & Biases. They have an amazing podcast by the name Gradient Dissent, where they interview machine learning experts who discuss how they use learning-based algorithms to solve real-world problems. They've discussed biology, teaching robots, machine learning in outer space, and a whole lot more. Perfect for a fellow scholar with an open mind. Make sure to visit them through wandb.me slash gd or just click the link in the video description. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
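To make it clearer how the factors listed in the transcript above, wind, tree density, moisture, and tree distribution, typically enter a spread model, here is a tiny cellular-automaton sketch in Python. It is nowhere near the paper's physically based mesoscale simulation; the grid size, ignition probabilities, and wind weighting are invented purely for illustration.

```python
# Tiny cellular-automaton sketch (not the paper's model) of wildfire spread:
# tree density controls how connected the fuel is, moisture lowers the chance
# that a neighboring tree ignites, and wind biases spread toward one direction.
import numpy as np

np.random.seed(1)
N = 64
EMPTY, TREE, BURNING, BURNT = 0, 1, 2, 3

density = 0.65                                       # fraction of cells with a tree
forest = np.where(np.random.rand(N, N) < density, TREE, EMPTY)
moisture = np.clip(np.random.normal(0.3, 0.15, (N, N)), 0.0, 0.9)
forest[N // 2, N // 2] = BURNING                     # the lightning strike

wind = np.array([0.0, 1.0])                          # blowing toward +x (east)
neighbors = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def step(forest):
    new = forest.copy()
    for i, j in np.argwhere(forest == BURNING):
        for di, dj in neighbors:
            ni, nj = i + di, j + dj
            if 0 <= ni < N and 0 <= nj < N and forest[ni, nj] == TREE:
                alignment = wind @ np.array([di, dj])   # downwind neighbors ignite more easily
                p = 0.35 * (1.0 + 0.8 * alignment) * (1.0 - moisture[ni, nj])
                if np.random.rand() < p:
                    new[ni, nj] = BURNING
        new[i, j] = BURNT                               # a burning tree burns out
    return new

for _ in range(60):
    forest = step(forest)
print("burnt trees:", int((forest == BURNT).sum()),
      "out of", int((forest != EMPTY).sum()))
```

Even in this toy, raising the density or lowering the moisture makes the burn front race across the grid, while the wind term stretches it downwind, mirroring the qualitative behavior described in the transcript.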
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.8, "end": 10.4, "text": " In a previous episode, not so long ago, we burned down a virtual tree."}, {"start": 10.4, "end": 15.4, "text": " This was possible through an amazing simulation paper from four years ago,"}, {"start": 15.4, "end": 19.6, "text": " where each leaf has its own individual mass and area."}, {"start": 19.6, "end": 23.8, "text": " They burn individually, transfer heat to their surroundings,"}, {"start": 23.8, "end": 31.6, "text": " and finally, branches bend and look can eventually even break in this process."}, {"start": 31.6, "end": 33.8, "text": " How quickly did this run?"}, {"start": 33.8, "end": 35.9, "text": " Of course, in real time."}, {"start": 35.9, "end": 42.3, "text": " Well, that is quite a paper, so if this was so good, how does anyone improve that?"}, {"start": 42.3, "end": 44.900000000000006, "text": " Burn down another virtual tree?"}, {"start": 44.900000000000006, "end": 48.1, "text": " No, no, that would be too easy."}, {"start": 48.1, "end": 53.800000000000004, "text": " You know what? Instead, let's set on fire an entire virtual forest."}, {"start": 53.800000000000004, "end": 55.1, "text": " Oh yeah!"}, {"start": 55.1, "end": 61.5, "text": " Here you see a simulation of a devastating fire from a lightning strike in Yosemite National Park."}, {"start": 61.5, "end": 65.7, "text": " The simulations this time around are typically sped up a great deal"}, {"start": 65.7, "end": 69.2, "text": " to be able to give us a better view of how it spreads,"}, {"start": 69.2, "end": 73.2, "text": " so if you see some flickering, that is the reason for that."}, {"start": 73.2, "end": 76.8, "text": " But wait, is that really that much harder?"}, {"start": 76.8, "end": 82.0, "text": " Why not just put a bunch of trees next to each other and start the simulation?"}, {"start": 82.0, "end": 83.5, "text": " Would that work?"}, {"start": 83.5, "end": 86.2, "text": " The answer is a resounding no."}, {"start": 86.2, "end": 91.89999999999999, "text": " Let's have a look why, and with that hold onto your papers because here comes the best part."}, {"start": 91.89999999999999, "end": 97.3, "text": " It also simulates not only the fire, but cloud dynamics as well."}, {"start": 97.3, "end": 104.0, "text": " Here you see how the wildfire creates lots of hot and dark smoke closer to the ground,"}, {"start": 104.0, "end": 107.2, "text": " and wait for it."}, {"start": 107.2, "end": 109.4, "text": " Yes, there we go."}, {"start": 109.4, "end": 114.5, "text": " Higher up, the condensation of water creates this lighter, cloudy region."}, {"start": 114.5, "end": 119.3, "text": " Yes, this is key to the simulation, not because of the aesthetic effects,"}, {"start": 119.3, "end": 126.3, "text": " but this wildfire can indeed create a cloud type that goes by the name Flamagianitis."}, {"start": 126.3, "end": 129.5, "text": " So, is that good or bad news?"}, {"start": 129.5, "end": 131.3, "text": " Well, both."}, {"start": 131.3, "end": 137.5, "text": " Let's start with the good news, it often produces rainfall which helps putting out the fire."}, {"start": 137.5, "end": 142.5, "text": " Well, that is wonderful news, so then what is so bad about it?"}, {"start": 142.5, "end": 149.5, "text": " Well, Flamagianitis clouds may also trigger a thunderstorm and thus create another huge fire."}, {"start": 149.5, "end": 151.70000000000002, "text": " That's bad news, 
number one."}, {"start": 151.70000000000002, "end": 160.10000000000002, "text": " And bad news number two, it also occludes the fire, thereby making it harder to locate and extinguish it."}, {"start": 160.1, "end": 165.6, "text": " So, got it, add cloud dynamics to the tree fire simulator and we are done."}, {"start": 165.6, "end": 166.6, "text": " Right?"}, {"start": 166.6, "end": 168.9, "text": " No, not even close."}, {"start": 168.9, "end": 173.6, "text": " In a forest fire simulation, not just clouds, everything matters."}, {"start": 173.6, "end": 179.4, "text": " For instance, we first need to take into consideration the wind intensity and direction."}, {"start": 179.4, "end": 186.29999999999998, "text": " This can mean the difference between a manageable or a devastating forest fire."}, {"start": 186.3, "end": 193.3, "text": " Second, it takes into consideration the density and moisture intensity of different tree types."}, {"start": 193.3, "end": 199.10000000000002, "text": " For instance, you see that the darker trees here are burning down really slowly."}, {"start": 199.10000000000002, "end": 200.60000000000002, "text": " Why is that?"}, {"start": 200.60000000000002, "end": 206.70000000000002, "text": " That is because these trees are denser, birches and oak trees."}, {"start": 206.70000000000002, "end": 210.10000000000002, "text": " Third, the distribution of the trees also matter."}, {"start": 210.1, "end": 218.7, "text": " Of course, the more area is covered by trees, the more degrees of freedom there are for the fire to spread."}, {"start": 218.7, "end": 225.7, "text": " And fourth, fire can not only spread horizontally from 3 to 3, but vertically too."}, {"start": 225.7, "end": 230.9, "text": " Look, when a small tree catches fire, this can happen."}, {"start": 230.9, "end": 237.9, "text": " So, as we established from the previous paper, one tree catching fire can be simulated in real time."}, {"start": 237.9, "end": 241.1, "text": " So, what about an entire forest?"}, {"start": 241.1, "end": 244.5, "text": " Let's take the simulation with the most number of trees."}, {"start": 244.5, "end": 249.70000000000002, "text": " My goodness, they simulated 120k trees here."}, {"start": 249.70000000000002, "end": 255.5, "text": " And the computation time for one simulation step was 95."}, {"start": 255.5, "end": 258.1, "text": " So, 95 what?"}, {"start": 258.1, "end": 260.6, "text": " 95 milliseconds."}, {"start": 260.6, "end": 261.5, "text": " Wow!"}, {"start": 261.5, "end": 269.1, "text": " So, this thing runs interactively, which means that all of these phenomena can be simulated in close to real time."}, {"start": 269.1, "end": 274.9, "text": " With that, we can now model how a fire would spread in real forests around the world,"}, {"start": 274.9, "end": 279.1, "text": " test different kinds of fire barriers and their advantages,"}, {"start": 279.1, "end": 283.7, "text": " and we can even simulate how to effectively put out the fire."}, {"start": 283.7, "end": 290.1, "text": " And don't forget, we went from simulating one burning tree to 120k"}, {"start": 290.1, "end": 293.1, "text": " in just one more paper down the line."}, {"start": 293.1, "end": 295.1, "text": " What a time to be alive!"}, {"start": 295.1, "end": 298.3, "text": " This video has been supported by weights and biases."}, {"start": 298.3, "end": 302.1, "text": " They have an amazing podcast by the name Gradient Descent,"}, {"start": 302.1, "end": 309.90000000000003, "text": " where they interview machine 
learning experts who discuss how they use learning-based algorithms to solve real-world problems."}, {"start": 309.90000000000003, "end": 317.3, "text": " They've discussed biology, teaching robots, machine learning in outer space, and a whole lot more."}, {"start": 317.3, "end": 320.3, "text": " Perfect for a fellow scholar with an open mind."}, {"start": 320.3, "end": 325.3, "text": " Make sure to visit them through wnb.me slash gd"}, {"start": 325.3, "end": 327.90000000000003, "text": " or just click the link in the video description."}, {"start": 327.90000000000003, "end": 331.3, "text": " Our thanks to weights and biases for their long-standing support"}, {"start": 331.3, "end": 334.3, "text": " and for helping us make better videos for you."}, {"start": 334.3, "end": 347.3, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=2qqDwaZlkE0
Glitter Simulation, Now Faster Than Ever! ✨
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/wandb/getting-started/reports/Debug-Compare-Reproduce-Machine-Learning-Models--VmlldzoyNzY5MDk?utm_source=karoly 📝 The paper "Slope-Space Integrals for Specular Next Event Estimation" is available here: https://rgl.epfl.ch/publications/Loubet2020Slope ☀️ Free rendering course: https://users.cg.tuwien.ac.at/zsolnai/gfx/rendering-course/ 🔮 Paper with the difficult scene: https://users.cg.tuwien.ac.at/zsolnai/gfx/adaptive_metropolis/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. I am a light transport researcher by trade, and due to popular request, today I am delighted to show you these beautiful results from a research paper. This is a new light simulation technique that can create an image like this. This looks almost exactly like reality. It can also make an image like this. And have a look at this. This is a virtual object that glitters. Oh my goodness, absolutely beautiful, right? Well, believe it or not, this third result is wrong. Now, you see it is not so bad, but there is a flaw in it somewhere. By the end of this video you will know exactly where and what is wrong. So, light transport. How do these techniques work anyway? We can create such an image by simulating the path of millions and millions of light rays. Initially, this image will look noisy, and as we add more and more rays, it will slowly clean up over time. If we don't have a well optimized program, this can take from hours to days to compute for difficult scenes. For instance, this difficult scene took us several weeks to compute. Okay, so what makes a scene difficult? Typically, caustics and specular light transport. What does that mean? Look, here we have a caustic pattern that takes many, many millions, if not billions, of light rays to compute properly. This can get tricky because these are light paths that we are very unlikely to hit with randomly generated rays. So how do we solve this problem? Well, one way of doing it is not trusting random light rays, but systematically finding these caustic light paths and computing them. This fantastic paper does exactly that. So let's look at one of those classic close-ups that are the hallmark of any modern light transport paper. Let's see. Yes. In this scene you see beautiful caustic patterns under these glossy metallic objects. Let's see what a simple random algorithm can do with this with an allowance of two minutes of rendering time. Well, do you see any caustics here? Do you see any bright points? These are the first signs of the algorithm finding small point samples of the caustic pattern, but that's about it. It would take at the very least several days for this algorithm to compute the entirety of it. This is what the fully rendered reference image looks like. This is the one that takes forever to compute. Quite different, right? Let's allocate two minutes of our time for the new method and see how well it does. Which one will it be closer to? Can it beat the naive algorithm? Now hold onto your papers and let's see together. Wow, on this part it looks almost exactly the same as the reference. This is insanity. A converged caustic region in two minutes. Whoa! The green close-up is also nearly completely done. Now, not everything is sunshine and rainbows. Look, the blue close-up is still a bit behind, but it still beats the naive algorithm handily. That is quite something. And yes, it can also render these beautiful underwater caustics in as little as five minutes. Five minutes. And I would not be surprised if many people thought this was an actual photograph from the real world. Loving it. Now, what about the glittery origami scene from the start of the video? This one. Was that footage really wrong? Yes, it was. Why? Well, look here. These glittery patterns are unstable. The effect is especially pronounced around here. This arises from the fact that the technique does not correctly take into consideration the curvature of this object when computing the image. Let's look at the corrected version and... Oh my goodness. No unnecessary flickering anywhere to be seen, just the beautiful glitter slowly changing as we rotate the object around. I could stare at this all day. Now, note that these kinds of glints are much more practical than most people would think. For instance, they also have a really pronounced effect when rendering a vinyl record and many other materials as well. So, from now on, we can render photorealistic images of difficult scenes with caustics and glitter not in a matter of days, but in a matter of minutes. What a time to be alive. And when watching all these beautiful results, if you are thinking that this light transport thing is pretty cool and you would like to learn more about it, I held a master-level course on this topic at the Technical University of Vienna. Since I was always teaching it to a handful of motivated students, I thought that the teachings shouldn't only be available for the privileged few who can afford a college education; the teachings should be available for everyone. Free education for everyone, that's what I want. So, the course is available free of charge for everyone, no strings attached, so make sure to click the link in the video description to get started. We write a full light simulation program from scratch there and learn about physics, the world around us, and more. This episode has been supported by Weights & Biases. In this post, they show you how you can get an email or Slack notification when your model crashes. With this, you can check on your model performance on any device. Heavenly. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights & Biases is free for all individuals, academics, and open source projects. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
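The difficulty described in the transcript above, that caustics come from a tiny sliver of path space which randomly generated rays almost never hit, can be demonstrated with a deliberately simplified Monte Carlo experiment in Python. This is not the paper's slope-space estimator; the "caustic" here is just a narrow bright interval, and concentrating samples on it stands in, very loosely, for systematically finding those specular light paths.

```python
# Simplified Monte Carlo experiment (not the paper's estimator). The 'caustic'
# is a narrow, very bright sliver of path space that uniformly random light
# paths almost never hit, so the naive estimate is extremely noisy. Spending
# half of the samples inside that sliver and reweighting by the sampling
# density cuts the noise dramatically at the same sample budget.
import numpy as np

np.random.seed(2)
CAUSTIC_WIDTH = 1e-3        # fraction of path space carrying the caustic
CAUSTIC_BRIGHTNESS = 500.0
true_value = CAUSTIC_WIDTH * CAUSTIC_BRIGHTNESS   # what a converged render shows

def radiance(u):
    """Toy light-path contribution: bright only inside a narrow interval."""
    return np.where(u < CAUSTIC_WIDTH, CAUSTIC_BRIGHTNESS, 0.0)

def naive_estimate(n):
    u = np.random.rand(n)                         # uniformly random light paths
    return radiance(u).mean()

def guided_estimate(n):
    pick = np.random.rand(n) < 0.5                # half the samples target the caustic
    u = np.where(pick, np.random.rand(n) * CAUSTIC_WIDTH, np.random.rand(n))
    pdf = 0.5 * (u < CAUSTIC_WIDTH) / CAUSTIC_WIDTH + 0.5   # mixture density
    return (radiance(u) / pdf).mean()

samples, runs = 10_000, 50
naive = [naive_estimate(samples) for _ in range(runs)]
guided = [guided_estimate(samples) for _ in range(runs)]
print(f"true value      : {true_value:.3f}")
print(f"naive estimate  : {np.mean(naive):.3f} +/- {np.std(naive):.3f}")
print(f"guided estimate : {np.mean(guided):.3f} +/- {np.std(guided):.3f}")
```

With the same sample budget, the naive estimator is far noisier than the guided one, which is why the naive render in the video shows only a few bright points after two minutes while the new method is nearly converged.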
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karojona Ifehr."}, {"start": 4.8, "end": 11.200000000000001, "text": " I am a light transport researcher by trade and due to popular request, today I am delighted"}, {"start": 11.200000000000001, "end": 15.6, "text": " to show you these beautiful results from a research paper."}, {"start": 15.6, "end": 20.68, "text": " This is a new light simulation technique that can create an image like this."}, {"start": 20.68, "end": 24.28, "text": " This looks almost exactly like reality."}, {"start": 24.28, "end": 28.6, "text": " It can also make an image like this."}, {"start": 28.6, "end": 31.400000000000002, "text": " And have a look at this."}, {"start": 31.400000000000002, "end": 34.4, "text": " This is a virtual object that glitters."}, {"start": 34.4, "end": 39.0, "text": " Oh my goodness, absolutely beautiful, right?"}, {"start": 39.0, "end": 43.760000000000005, "text": " Well, believe it or not, this third result is wrong."}, {"start": 43.760000000000005, "end": 48.88, "text": " Now you see it is not so bad, but there is a flaw in it somewhere."}, {"start": 48.88, "end": 53.92, "text": " By the end of this video you will know exactly where and what is wrong."}, {"start": 53.92, "end": 58.480000000000004, "text": " So light transport day, how do these techniques work anyway?"}, {"start": 58.48, "end": 64.6, "text": " We can create such an image by simulating the path of millions and millions of light rays."}, {"start": 64.6, "end": 70.72, "text": " And initially this image will look noisy and as we add more and more rays, this image"}, {"start": 70.72, "end": 73.8, "text": " will slowly clean up over time."}, {"start": 73.8, "end": 79.03999999999999, "text": " If we don't have a well optimized program, this can take from hours to days to compute"}, {"start": 79.03999999999999, "end": 80.88, "text": " for difficult scenes."}, {"start": 80.88, "end": 85.56, "text": " For instance, this difficult scene took us several weeks to compute."}, {"start": 85.56, "end": 89.64, "text": " Okay, so what makes a scene difficult?"}, {"start": 89.64, "end": 94.56, "text": " Typically, caustics and specular light transport."}, {"start": 94.56, "end": 95.96000000000001, "text": " What does that mean?"}, {"start": 95.96000000000001, "end": 102.12, "text": " Look, here we have a caustic pattern that takes many many millions if not billions of light"}, {"start": 102.12, "end": 104.56, "text": " rays to compute properly."}, {"start": 104.56, "end": 109.16, "text": " This can get tricky because these are light paths that we are very unlikely to hit with"}, {"start": 109.16, "end": 111.36, "text": " randomly generated rays."}, {"start": 111.36, "end": 113.72, "text": " So how do we solve this problem?"}, {"start": 113.72, "end": 120.0, "text": " Well, one way of doing it is not trusting random light rays, but systematically finding"}, {"start": 120.0, "end": 123.76, "text": " these caustic light paths and computing them."}, {"start": 123.76, "end": 126.96, "text": " This fantastic paper does exactly that."}, {"start": 126.96, "end": 132.16, "text": " So let's look at one of those classic close-ups that are the hallmark of any modern light"}, {"start": 132.16, "end": 134.0, "text": " transport paper."}, {"start": 134.0, "end": 135.0, "text": " Let's see."}, {"start": 135.0, "end": 136.0, "text": " Yes."}, {"start": 136.0, "end": 142.28, "text": " On this scene you see beautiful caustic patterns under these glossy metallic 
objects."}, {"start": 142.28, "end": 147.56, "text": " Let's see what a simple random algorithm can do with this with an allowance of two minutes"}, {"start": 147.56, "end": 149.36, "text": " of rendering time."}, {"start": 149.36, "end": 153.44, "text": " Well, do you see any caustics here?"}, {"start": 153.44, "end": 155.92000000000002, "text": " Do you see any bright points?"}, {"start": 155.92000000000002, "end": 161.72, "text": " These are the first signs of the algorithm finding small point samples of the caustic pattern,"}, {"start": 161.72, "end": 163.84, "text": " but that's about it."}, {"start": 163.84, "end": 169.24, "text": " It would take at the very least several days for this algorithm to compute the entirety"}, {"start": 169.24, "end": 170.8, "text": " of it."}, {"start": 170.8, "end": 174.52, "text": " This is what the fully rendered reference image looks like."}, {"start": 174.52, "end": 177.88000000000002, "text": " This is the one that takes forever to compute."}, {"start": 177.88000000000002, "end": 180.12, "text": " Quite different, right?"}, {"start": 180.12, "end": 185.88000000000002, "text": " Let's allocate two minutes of our time for the new method and see how well it does."}, {"start": 185.88000000000002, "end": 187.88000000000002, "text": " Which one will it be closer to?"}, {"start": 187.88000000000002, "end": 190.52, "text": " Can it beat the naive algorithm?"}, {"start": 190.52, "end": 196.4, "text": " Now hold onto your papers and let's see together."}, {"start": 196.4, "end": 202.6, "text": " But on this part it looks almost exactly the same as the reference."}, {"start": 202.6, "end": 204.28, "text": " This is insanity."}, {"start": 204.28, "end": 208.0, "text": " A converged caustic region in two minutes."}, {"start": 208.0, "end": 209.4, "text": " Whoa!"}, {"start": 209.4, "end": 213.0, "text": " The green close-up is also nearly completely done."}, {"start": 213.0, "end": 216.12, "text": " Now, not everything is sunshine and rainbows."}, {"start": 216.12, "end": 222.28, "text": " Look, the blue close-up is still a bit behind, but it still beats the naive algorithm"}, {"start": 222.28, "end": 223.8, "text": " handily."}, {"start": 223.8, "end": 225.96, "text": " That is quite something."}, {"start": 225.96, "end": 231.84, "text": " And yes, it can also render these beautiful underwater caustics as well in as little as"}, {"start": 231.84, "end": 233.52, "text": " five minutes."}, {"start": 233.52, "end": 235.36, "text": " Five minutes."}, {"start": 235.36, "end": 240.08, "text": " And I would not be surprised if many people would think this is an actual photograph"}, {"start": 240.08, "end": 241.96, "text": " from the real world."}, {"start": 241.96, "end": 242.96, "text": " Loving it."}, {"start": 242.96, "end": 248.44, "text": " Now, what about the glittery origami scene from the start of the video?"}, {"start": 248.44, "end": 249.44, "text": " This one."}, {"start": 249.44, "end": 251.72, "text": " Was that footage really wrong?"}, {"start": 251.72, "end": 253.52, "text": " Yes, it was."}, {"start": 253.52, "end": 254.52, "text": " Why?"}, {"start": 254.52, "end": 256.56, "text": " Well, look here."}, {"start": 256.56, "end": 259.32, "text": " These glittery patterns are unstable."}, {"start": 259.32, "end": 262.96000000000004, "text": " The effect is especially pronounced around here."}, {"start": 262.96000000000004, "end": 268.04, "text": " This arises from the fact that the technique does not take into consideration the curvature"}, {"start": 
268.04, "end": 271.32, "text": " of this object correctly when computing the image."}, {"start": 271.32, "end": 273.96000000000004, "text": " Let's look at the corrected version and..."}, {"start": 273.96000000000004, "end": 277.08, "text": " Oh my goodness."}, {"start": 277.08, "end": 283.2, "text": " No unnecessary flickering anywhere to be seen, just the beautiful glitter slowly changing"}, {"start": 283.2, "end": 285.59999999999997, "text": " as we rotate the object around."}, {"start": 285.59999999999997, "end": 288.2, "text": " I could stare at this all day."}, {"start": 288.2, "end": 293.92, "text": " Now, note that these kinds of glints are much more practical than most people would think."}, {"start": 293.92, "end": 299.96, "text": " For instance, it also has a really pronounced effect when rendering a vinyl record and many"}, {"start": 299.96, "end": 301.96, "text": " other materials as well."}, {"start": 301.96, "end": 308.0, "text": " So, from now on, we can render photorealistic images of difficult scenes with caustics and"}, {"start": 308.0, "end": 313.68, "text": " glitter not in a matter of days, but in a matter of minutes."}, {"start": 313.68, "end": 315.96, "text": " What a time to be alive."}, {"start": 315.96, "end": 320.32, "text": " And when watching all these beautiful results, if you are thinking that this light transport"}, {"start": 320.32, "end": 325.36, "text": " thing is pretty cool and you would like to learn more about it, I held a master-level"}, {"start": 325.36, "end": 329.36, "text": " course on this topic at the Technical University of Vienna."}, {"start": 329.36, "end": 334.2, "text": " Since I was always teaching it to a handful of motivated students, I thought that the teachings"}, {"start": 334.2, "end": 339.8, "text": " shouldn't only be available for the privileged few who can afford a college education,"}, {"start": 339.8, "end": 343.28, "text": " but the teachings should be available for everyone."}, {"start": 343.28, "end": 346.2, "text": " Free education for everyone, that's what I want."}, {"start": 346.2, "end": 351.56, "text": " So, the course is available free of charge for everyone, no strings attached, so make"}, {"start": 351.56, "end": 355.15999999999997, "text": " sure to click the link in the video description to get started."}, {"start": 355.15999999999997, "end": 360.03999999999996, "text": " We write a full-light simulation program from scratch there and learn about physics, the"}, {"start": 360.03999999999996, "end": 362.48, "text": " world around us, and more."}, {"start": 362.48, "end": 365.56, "text": " This episode has been supported by weights and biases."}, {"start": 365.56, "end": 370.12, "text": " In this post, they show you how you can get an email or select notification when your"}, {"start": 370.12, "end": 371.76, "text": " model crashes."}, {"start": 371.76, "end": 375.56, "text": " With this, you can check on your model performance on any device."}, {"start": 375.56, "end": 376.56, "text": " Heavenly."}, {"start": 376.56, "end": 381.32, "text": " Weight and biases provides tools to track your experiments in your deep learning projects."}, {"start": 381.32, "end": 386.08000000000004, "text": " Their system is designed to save you a ton of time and money and it is actively used"}, {"start": 386.08000000000004, "end": 392.28000000000003, "text": " in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more."}, {"start": 392.28, "end": 397.47999999999996, "text": " And the best part is that weights 
and biases is free for all individuals, academics, and"}, {"start": 397.47999999999996, "end": 398.96, "text": " open source projects."}, {"start": 398.96, "end": 401.59999999999997, "text": " It really is as good as it gets."}, {"start": 401.59999999999997, "end": 407.28, "text": " Make sure to visit them through wnb.com slash papers or just click the link in the video"}, {"start": 407.28, "end": 410.35999999999996, "text": " description and you can get a free demo today."}, {"start": 410.35999999999996, "end": 414.76, "text": " Our thanks to weights and biases for their long-standing support and for helping us make"}, {"start": 414.76, "end": 416.23999999999995, "text": " better videos for you."}, {"start": 416.24, "end": 425.36, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=SEsYo9L5lOo
Google’s New AI Puts Video Calls On Steroids! 💪
❤️ Check out Fully Connected by Weights & Biases: https://wandb.me/papers 📝 The paper "Total Relighting: Learning to Relight Portraits for Background Replacement" is available here: https://augmentedperception.github.io/total_relighting/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. With the increased popularity of online meetings, telepresence applications are on the rise, where we can talk to each other from afar. Today, let's see how these powerful, new neural network-based learning methods can be applied to them. It turns out they can help us do everyone's favorite, which is showing up to a meeting and changing our background to pretend we are somewhere else. Now, that is a deceptively difficult problem. Here the background has been changed, that is the easier problem, but look, the lighting of the new environment hasn't been applied to the subject. And now hold on to your papers and check this out. This is the result of the new technique after it recreates the image as if she were really there. I particularly like the fact that the results include high quality specular highlights too, or in other words, the environment reflecting off of our skin. However, of course, this is not the first method attempting this. So, let's see how it performs compared to the competition. These techniques are from one and two years ago, and they don't perform so well. Not only did they lose a lot of detail all across the image, but the specular highlights are gone. As a result, the image feels more like a video game character than a real person. Luckily, the authors also have access to the reference information to make our job of comparing the results easier. Roughly speaking, the more the outputs look like this, the better. So now hold on to your papers and let's see how the new method performed. Oh yes, now we're talking. Now, of course, not even this is perfect. Clearly, the specularity of the clothing was determined incorrectly, and the matting around the thinner parts of the hair could be better, which is notoriously difficult to get right. But this is a huge step forward in just one paper. And we are not nearly done. There are two more things that I found to be remarkable about this work. One is that the whole method was trained on still images, yet it still works on video too. And we don't have any apparent temporal coherence issues, or in other words, no flickering, even though it processes the video not as a video, but as a series of separate images. Very cool. Two, if we are in a meeting with someone and we really like their background, we can simply borrow it. Look, this technique can take their image, get the background out, estimate its lighting, and give the whole package to us too. I think this will be a game changer. People may start to become more selective with these backgrounds, not just because of how the background looks, but because of how it makes them look. Remember, lighting off of a well-chosen background makes a great deal of difference in our appearance in the real world, and now, with this method, in virtual worlds too. And this will likely happen not decades from now, but in the near future. So, this new method is clearly capable of some serious magic. But how? What is going on under the hood to achieve this? This method performs two important steps to accomplish this. Step number one is matting. This means separating the foreground from the background, and then, if done well, we can easily cut out the background, have the subject on a separate layer, and proceed to step number two, which is relighting. In this step, the goal is to estimate the illumination of the new scene and recolor the subject as if she were really there. This new technique performs both, but most of the contributions lie in this second step. To be able to accomplish this, we have to be able to estimate the material properties of the subject. The technique has to know, one, where the diffuse parts are, these are the parts that don't change too much as the lighting changes, and two, where the specular parts are, in other words, the shiny regions that reflect back the environment more clearly. Putting it all together, we get really high quality relighting for ourselves, and, given that this was developed by Google, I expect that this will supercharge our meetings quite soon. And just imagine what we will have two more papers down the line. My goodness! What a time to be alive! This video has been supported by Weights & Biases. Check out their recent offering, Fully Connected, a place where they bring machine learning practitioners together to share and discuss their ideas, learn from industry leaders, and even collaborate on projects together. You see, I get messages from you Fellow Scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is! Fully Connected is a great way to learn about the fundamentals, how to reproduce experiments, get your papers accepted to a conference, and more. Make sure to visit them through wandb.me slash papers, or just click the link in the video description. Our thanks to Weights & Biases for their longstanding support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time!
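As a rough illustration of the two steps described in the transcript above, matting and relighting, here is a minimal NumPy sketch. It is far simpler than the paper's learned pipeline: the environment light is just the mean color of the new background, the diffuse and specular terms are crude approximations, and the image arrays, masks, and shininess parameter are hypothetical inputs made up for the example.

```python
# Minimal sketch (far simpler than the paper's learned pipeline) of matting
# plus relighting: separate the subject with an alpha matte, tint it with an
# estimate of the new environment's light, add a simple specular term, and
# composite it over the new background. All inputs are hypothetical.
import numpy as np

def relight_and_composite(subject, alpha, new_background, specular_mask, shininess=0.25):
    """subject, new_background: HxWx3 floats in [0, 1]; alpha, specular_mask: HxW."""
    # Crude estimate of the new scene's illumination: average background color.
    env_light = new_background.reshape(-1, 3).mean(axis=0)

    # Diffuse relighting: tint the subject with the environment color.
    relit = subject * env_light

    # Simple specular term on shiny regions (skin highlights, glasses, ...).
    relit = relit + shininess * specular_mask[..., None] * env_light

    # Matting / compositing: alpha-blend the relit subject over the new backdrop.
    a = alpha[..., None]
    return np.clip(a * relit + (1.0 - a) * new_background, 0.0, 1.0)

# Hypothetical tiny example: a gray subject composited into a warm orange scene.
h, w = 4, 4
subject = np.full((h, w, 3), 0.6)
alpha = np.zeros((h, w)); alpha[1:3, 1:3] = 1.0          # subject fills the center
specular_mask = np.zeros((h, w)); specular_mask[1, 1] = 1.0
background = np.zeros((h, w, 3)); background[..., 0] = 0.9; background[..., 1] = 0.5
print(relight_and_composite(subject, alpha, background, specular_mask)[1, 1])
```

The learned method replaces each of these hand-made pieces, the alpha matte, the per-pixel diffuse and specular estimates, and the environment lighting, with network predictions, which is what makes the composites in the video look plausible.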
[{"start": 0.0, "end": 5.8, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karajona Ifehir, with the increased"}, {"start": 5.8, "end": 11.4, "text": " popularity of online meetings, telepresence applications, or on their eyes where we can"}, {"start": 11.4, "end": 13.4, "text": " talk to each other from afar."}, {"start": 13.4, "end": 19.64, "text": " Today, let's see how these powerful, new neural network-based learning methods can be applied"}, {"start": 19.64, "end": 20.64, "text": " to them."}, {"start": 20.64, "end": 27.76, "text": " It turns out they can help us do everyone's favorite, which is showing up to a meeting and"}, {"start": 27.76, "end": 31.92, "text": " changing our background to pretend we are somewhere else."}, {"start": 31.92, "end": 35.24, "text": " Now that is a deceptively difficult problem."}, {"start": 35.24, "end": 41.72, "text": " Here the background has been changed, that is the easier problem, but look, the lighting"}, {"start": 41.72, "end": 46.040000000000006, "text": " of the new environment hasn't been applied to the subject."}, {"start": 46.040000000000006, "end": 49.92, "text": " And now hold on to your papers and check this out."}, {"start": 49.92, "end": 55.760000000000005, "text": " This is the result of the new technique after it recreates the image as if she was really"}, {"start": 55.760000000000005, "end": 56.760000000000005, "text": " there."}, {"start": 56.76, "end": 62.32, "text": " Particularly like the fact that the results include high quality specular highlights too,"}, {"start": 62.32, "end": 66.75999999999999, "text": " or in other words, the environment reflecting off of our skin."}, {"start": 66.75999999999999, "end": 71.32, "text": " However, of course, this is not the first method attempting this."}, {"start": 71.32, "end": 75.88, "text": " So, let's see how it performs compared to the competition."}, {"start": 75.88, "end": 82.68, "text": " These techniques are from one and two years ago, and they don't perform so well."}, {"start": 82.68, "end": 88.48, "text": " Not only did they lose a lot of detail all across the image, but the specular highlights"}, {"start": 88.48, "end": 89.84, "text": " are gone."}, {"start": 89.84, "end": 95.48, "text": " As a result, the image feels more like a video game character than a real person."}, {"start": 95.48, "end": 101.08000000000001, "text": " Luckily, the authors also have access to the reference information to make our job comparing"}, {"start": 101.08000000000001, "end": 103.32000000000001, "text": " the results easier."}, {"start": 103.32000000000001, "end": 107.80000000000001, "text": " Roughly speaking, the more the outputs look like this, the better."}, {"start": 107.8, "end": 112.92, "text": " So now hold on to your papers and let's see how the new method performed."}, {"start": 112.92, "end": 116.44, "text": " Oh yes, now we're talking."}, {"start": 116.44, "end": 119.6, "text": " Now of course, not even this is perfect."}, {"start": 119.6, "end": 126.4, "text": " Clearly, the specularity of clothing was determined incorrectly, and the metting around the thinner"}, {"start": 126.4, "end": 132.04, "text": " parts of the hair could be better, which is notoriously difficult to get right."}, {"start": 132.04, "end": 136.51999999999998, "text": " But this is a huge step forward in just one paper."}, {"start": 136.52, "end": 138.20000000000002, "text": " And we are not nearly done."}, {"start": 138.20000000000002, "end": 143.16000000000003, "text": " There are two more 
things that are found to be remarkable about this work."}, {"start": 143.16000000000003, "end": 149.04000000000002, "text": " One is that the whole method was trained on still images, yet it still works on video"}, {"start": 149.04000000000002, "end": 150.36, "text": " too."}, {"start": 150.36, "end": 156.32000000000002, "text": " And we don't have any apparent tempero coherence issues, or in other words, no flickering"}, {"start": 156.32000000000002, "end": 162.84, "text": " arises from the fact that it processes the video not as a video, but a series of separate"}, {"start": 162.84, "end": 163.84, "text": " images."}, {"start": 163.84, "end": 164.84, "text": " Very cool."}, {"start": 164.84, "end": 170.20000000000002, "text": " Two, if we are in a meeting with someone and we really like their background, we can"}, {"start": 170.20000000000002, "end": 172.56, "text": " simply borrow it."}, {"start": 172.56, "end": 179.24, "text": " Look, this technique can take their image, get the background out, estimate its lighting,"}, {"start": 179.24, "end": 181.8, "text": " and give the whole package to us too."}, {"start": 181.8, "end": 185.0, "text": " I think this will be a game changer."}, {"start": 185.0, "end": 189.2, "text": " People may start to become more selective with these backgrounds, not just because of how"}, {"start": 189.2, "end": 193.96, "text": " the background looks, but because how it makes them look."}, {"start": 193.96, "end": 199.08, "text": " Remember, lighting off of a well chosen background makes a great deal of a difference in our"}, {"start": 199.08, "end": 204.76000000000002, "text": " appearance in the real world, and now with this method, in virtual worlds too."}, {"start": 204.76000000000002, "end": 209.64000000000001, "text": " And this will likely happen not decades from now, but in the near future."}, {"start": 209.64000000000001, "end": 214.88, "text": " So, this new method is clearly capable of some serious magic."}, {"start": 214.88, "end": 216.20000000000002, "text": " But how?"}, {"start": 216.20000000000002, "end": 219.48000000000002, "text": " What is going on under the hood to achieve this?"}, {"start": 219.48, "end": 223.67999999999998, "text": " This method performs two important steps to accomplish this."}, {"start": 223.67999999999998, "end": 225.84, "text": " Step number one is matting."}, {"start": 225.84, "end": 230.88, "text": " This means separating the foreground from the background, and then, if done well, we"}, {"start": 230.88, "end": 236.6, "text": " can now easily cut out the background and also have the subject on a separate layer and"}, {"start": 236.6, "end": 241.16, "text": " proceed to step number two, which is, relighting."}, {"start": 241.16, "end": 247.76, "text": " In this step, the goal is to estimate the illumination of the new scene and recalor the subject"}, {"start": 247.76, "end": 250.32, "text": " as if she were really there."}, {"start": 250.32, "end": 256.24, "text": " This new technique performs both, but most of the contributions lie in this step."}, {"start": 256.24, "end": 260.92, "text": " To be able to accomplish this, we have to be able to estimate the material properties"}, {"start": 260.92, "end": 262.36, "text": " of the subject."}, {"start": 262.36, "end": 267.84, "text": " The technique has to know one, where the diffuse parts are, these are the parts that don't"}, {"start": 267.84, "end": 274.15999999999997, "text": " change too much as the lighting changes, and two, where the specular parts are, in other"}, 
{"start": 274.16, "end": 279.48, "text": " words, shiny regions that reflect back the environment more clearly."}, {"start": 279.48, "end": 284.72, "text": " Putting it all together, we get really high quality relighting for ourselves, and, given"}, {"start": 284.72, "end": 291.28000000000003, "text": " that this was developed by Google, I expect that this was supercharger meetings quite soon."}, {"start": 291.28000000000003, "end": 295.72, "text": " And just imagine what we will have two more papers down the line."}, {"start": 295.72, "end": 297.40000000000003, "text": " My goodness!"}, {"start": 297.40000000000003, "end": 299.12, "text": " What a time to be alive!"}, {"start": 299.12, "end": 304.52, "text": " This video has been supported by weights and biases, check out the recent offering, fully"}, {"start": 304.52, "end": 310.0, "text": " connected, a place where they bring machine learning practitioners together to share and"}, {"start": 310.0, "end": 316.96, "text": " discuss their ideas, learn from industry leaders, and even collaborate on projects together."}, {"start": 316.96, "end": 321.84000000000003, "text": " You see, I get messages from you fellow scholars telling me that you have been inspired by"}, {"start": 321.84000000000003, "end": 326.4, "text": " the series, but don't really know where to start."}, {"start": 326.4, "end": 327.92, "text": " And here it is!"}, {"start": 327.92, "end": 333.6, "text": " Simply connected is a great way to learn about the fundamentals, how to reproduce experiments,"}, {"start": 333.6, "end": 337.48, "text": " get your papers accepted to a conference, and more."}, {"start": 337.48, "end": 343.56, "text": " Make sure to visit them through wnb.me slash papers, or just click the link in the video"}, {"start": 343.56, "end": 344.72, "text": " description."}, {"start": 344.72, "end": 349.76, "text": " Our thanks to weights and biases for their longstanding support, and for helping us make"}, {"start": 349.76, "end": 351.16, "text": " better videos for you."}, {"start": 351.16, "end": 358.16, "text": " Thanks for watching and for your generous support, and I'll see you next time!"}]
Two Minute Papers
https://www.youtube.com/watch?v=rSPwOeX46UA
This is Grammar For Robots. What? Why? 🤖
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "RoboGrammar: Graph Grammar for Terrain-Optimized Robot Design " is available here: https://people.csail.mit.edu/jiex/papers/robogrammar/index.html Breakdancing robot paper: http://moghs.csail.mit.edu/ Building grammar paper: https://www.cg.tuwien.ac.at/research/publications/2015/Ilcik_2015_LAY/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to generate robots with grammars. Wait a second, grammars of all things. What do grammars have to do with robots? Do we need to teach them grammar to speak correctly? No, no, of course not. To answer these questions, let's invoke the Second Law of Papers, which says that whatever you are thinking about, there is already a Two Minute Papers episode on that. Even on grammars. Let's see if it applies here. In this earlier work, we talked about generating buildings with grammars. So, how does that work? Grammars are a set of rules that tell us how to build up a structure, such as a sentence, properly from small elements like nouns, adjectives, and so on. My friend Martin Ilčík loves to build buildings from grammars. For instance, a shape grammar for buildings can describe rules like: a wall can contain several windows, below a window goes a windowsill, one wall may have at most two doors attached, and so on. A later paper also used a similar concept to generate tangle patterns. So, this grammar thing has some power in assembling things after all. So, can we apply this knowledge to build robots? First, the robots in this new paper are built up as a collection of these joint types, links, and wheels, which can come in all kinds of sizes and weights. Now, our question is: how do we assemble them in a way that they can traverse a given terrain effectively? Well, time for some experiments. Look at this robot. It has a lot of character and can deal with this terrain pretty well. Now, look at this poor thing. Someone in the lab at MIT had a super fun day with this one, I am sure. Now, this one can sort of do the job, but let's see the power of grammars and search algorithms in creating more optimized robots for a variety of terrains. First, a flat terrain. Let's see. Yes, now we're talking. This one is traversing at great speed. And this one works too. I like how it was able to find vastly different robot structures that both perform well here. Now, let's look at a slightly harder level with gapped terrain. Look. Oh, wow. Loving this. The algorithm recognized that a more rigid body is required to efficiently step over the gaps.
And now, I wonder what happens if we add some ridges to the levels, so it cannot only step over the gaps, but has to climb. Let's see. And we get those long limbs that can indeed climb over the ridges. Excellent. Now, add a staircase and see who can climb it well. The algorithm says: well, someone with long arms and a somewhat elastic body. Let's challenge the algorithm some more. Let's add, for instance, a frozen lake. Who can traverse a flat surface that is really slippery? Does the algorithm know? Look, it says: someone who can exploit the low-friction surface by dragging itself across it. Or someone with many legs. Loving this. Now, this is way too much fun, so let's do two more. What about a walled terrain example? What kind of robot would work there? One with a more elastic body, carefully designed to be able to curve sharply, enabling rapid direction changes. But it cannot be too long, or else it would bang its head into the wall. This is indeed a carefully crafted specimen for this particular level. Now, of course, real-world situations often involve multiple kinds of terrain, not just one. And of course, the authors of this paper know that very well and also asked the algorithm to design a specimen that can traverse walled and ridged terrains really well. Make sure to have a look at the paper, which even shows graphs of robot archetypes that work on different terrains. It turns out one can even make claims about optimality, which is a strong statement. I did not expect that at all. So, apparently, grammars are amazing at generating many kinds of complex structures, including robots. And note that this paper also has a follow-up work from the same group, where they took this a step further and made figure-skating and break-dancing robots. What a time to be alive! The link to that one is also available in the video description. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only cloud service with 48GB RTX 8000s. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karajol Leifahir."}, {"start": 4.8, "end": 8.64, "text": " Today we are going to generate robots with grammars."}, {"start": 9.52, "end": 15.36, "text": " Wait a second, grammars of all things. What do grammars have to do with robots?"}, {"start": 15.92, "end": 18.48, "text": " Do we need to teach them grammar to speak correctly?"}, {"start": 19.36, "end": 25.68, "text": " No, no, of course not. To answer these questions, let's invoke the second law of papers,"}, {"start": 25.68, "end": 31.6, "text": " which says that whatever you are thinking about, there is already a Two Minute Papers episode on that."}, {"start": 32.32, "end": 39.04, "text": " Even on grammars. Let's see if it applies here. In this earlier work, we talked about generating"}, {"start": 39.04, "end": 47.120000000000005, "text": " buildings with grammars. So, how does that work? Grammars are a set of rules that tell us how to build"}, {"start": 47.120000000000005, "end": 55.2, "text": " up a structure such as a sentence properly from small elements like nouns, adjectives, and so on."}, {"start": 55.2, "end": 61.2, "text": " My friend Martin Elchick loves to build buildings from grammars. For instance, a shape grammar for"}, {"start": 61.2, "end": 68.96000000000001, "text": " buildings can describe rules like a wall can contain several windows. Below a window goes a windows"}, {"start": 68.96000000000001, "end": 77.12, "text": " seal. One wall may have at most two doors attached and so on. A later paper also used a similar"}, {"start": 77.12, "end": 84.56, "text": " concept to generate tangle patterns. So, this grammar thing has some power in assembling things after all."}, {"start": 85.2, "end": 92.88000000000001, "text": " So, can we apply this knowledge to build robots? First, the robots in this new paper are built up"}, {"start": 92.88000000000001, "end": 100.24000000000001, "text": " as a collection of these joint types, links, and wheels, which can come in all kinds of sizes and weights."}, {"start": 100.24, "end": 107.03999999999999, "text": " Now, our question is how do we assemble them in a way that they can traverse a given terrain"}, {"start": 107.03999999999999, "end": 115.36, "text": " effectively? Well, time for some experiments. Look at this robot. It has a lot of character"}, {"start": 115.36, "end": 123.67999999999999, "text": " and can deal with this terrain pretty well. Now, look at this poor thing. Someone in the lab at MIT"}, {"start": 123.68, "end": 131.52, "text": " had a super fun day with this one I am sure. Now, this can sort of do the job, but now let's see"}, {"start": 131.52, "end": 138.16, "text": " the power of grammars and search algorithms in creating more optimized robots for a variety of"}, {"start": 138.16, "end": 147.20000000000002, "text": " the reins. First, a flat terrain. Let's see. Yes, now we're talking. This one is traversing at"}, {"start": 147.2, "end": 155.28, "text": " great speed. And this one works too. I like how it was able to find vastly different robot structures"}, {"start": 155.28, "end": 162.32, "text": " that both perform well here. Now, let's look at a little harder level with gaped terrains."}, {"start": 163.28, "end": 170.95999999999998, "text": " Look. Oh, wow. Loving this. The algorithm recognized that a more rigid body is required to"}, {"start": 170.96, "end": 180.32, "text": " efficiently step through the gaps. 
And now, I wonder what happens if we add some ridges to the levels,"}, {"start": 180.96, "end": 190.16, "text": " so it cannot only step through the gaps, but has to climb. Let's see. And we get those long limbs"}, {"start": 190.16, "end": 198.32, "text": " that can indeed climb through the ridges. Excellent. Now, add the staircase and see who can climb these"}, {"start": 198.32, "end": 206.07999999999998, "text": " well. The algorithm says, well, someone with long arms and a somewhat elastic body."}, {"start": 211.68, "end": 216.72, "text": " Let's challenge the algorithm some more. Let's add, for instance, a frozen lake."}, {"start": 217.76, "end": 225.35999999999999, "text": " Who can climb a flat surface that is really slippery? Does the algorithm know? Look, it says"}, {"start": 225.36, "end": 230.64000000000001, "text": " someone who can utilize a low friction surface by dragging itself through it."}, {"start": 232.72000000000003, "end": 241.12, "text": " Or someone with many legs. Loving this. Now, this is way too much fun. So, let's do two more."}, {"start": 241.84, "end": 248.88000000000002, "text": " What about a world terrain example? What kind of robot would work there? One with a more elastic"}, {"start": 248.88, "end": 256.56, "text": " body, carefully designed to be able to curve sharply, enabling rapid direction changes. But it"}, {"start": 256.56, "end": 263.36, "text": " cannot be too long or else it would bang its head into the wall. This is indeed a carefully"}, {"start": 263.36, "end": 270.08, "text": " crafted specimen for this particular level. Now, of course, real-world situations often involve"}, {"start": 270.08, "end": 276.64, "text": " multiple kinds of terrains, not just one. And of course, the authors of this paper know that very"}, {"start": 276.64, "end": 284.71999999999997, "text": " well and also ask the algorithm to design a specimen that can traverse world and rich terrains"}, {"start": 284.71999999999997, "end": 290.47999999999996, "text": " really well. Make sure to have a look at the paper which even shows graphs for robot"}, {"start": 290.47999999999996, "end": 296.47999999999996, "text": " archetypes that work on different terrains. It turns out one can even make claims about the"}, {"start": 296.47999999999996, "end": 304.08, "text": " optimality, which is a strong statement. I did not expect that at all. So, apparently,"}, {"start": 304.08, "end": 309.76, "text": " grammars are amazing at generating many kinds of complex structures, including robots."}, {"start": 310.4, "end": 315.2, "text": " And note that this paper also has a follow-up work from the same group,"}, {"start": 315.2, "end": 321.12, "text": " where they took this step further and made figure skating and break-dancing robots."}, {"start": 321.84, "end": 326.96, "text": " What a time to be alive! The link is also available in the video description for that one."}, {"start": 327.52, "end": 332.71999999999997, "text": " This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive,"}, {"start": 332.72, "end": 340.24, "text": " Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000,"}, {"start": 340.24, "end": 348.40000000000003, "text": " RTX 8000, and V100 instances. And hold onto your papers because Lambda GPU Cloud can cost less"}, {"start": 348.40000000000003, "end": 356.72, "text": " than half of AWS and Azure. 
Plus, they are the only Cloud service with 48GB RTX 8000."}, {"start": 356.72, "end": 363.28000000000003, "text": " Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances,"}, {"start": 363.28000000000003, "end": 369.76000000000005, "text": " workstations, or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their"}, {"start": 369.76000000000005, "end": 374.56, "text": " amazing GPU instances today. Our thanks to Lambda for their long-standing support"}, {"start": 374.56, "end": 379.6, "text": " and for helping us make better videos for you. Thanks for watching and for your generous support,"}, {"start": 379.6, "end": 389.6, "text": " and I'll see you next time."}]
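To make the grammar-plus-search idea from the transcript above more tangible, here is a tiny Python sketch: rewrite rules expand a start symbol into a list of parts (joints, links, wheels), and a naive random search keeps the design that scores best on a given terrain. The rules and the scoring function are invented for illustration only; they are not RoboGrammar's actual graph grammar or its physics-based evaluation.

# Minimal "grammar + search" sketch. The rules below are made up, and the
# terrain score is a random placeholder where a physics rollout would go.
import random

RULES = {
    "ROBOT": [["BODY"], ["BODY", "BODY"]],                 # a robot is 1-2 body segments
    "BODY":  [["link", "LIMB"], ["link", "LIMB", "LIMB"]],
    "LIMB":  [["joint", "link"], ["joint", "link", "wheel"]],
}

def expand(symbol):
    """Recursively rewrite non-terminals until only parts (lower-case) remain."""
    if symbol not in RULES:                                # terminal: joint, link, wheel
        return [symbol]
    choice = random.choice(RULES[symbol])
    return [part for s in choice for part in expand(s)]

def terrain_score(design, terrain="gapped"):
    """Placeholder fitness; the real method scores traversal via simulation."""
    favored = "wheel" if terrain == "flat" else "joint"
    return random.random() + 0.1 * design.count(favored)

# naive search: sample many grammar-generated designs, keep the best for this terrain
designs = [expand("ROBOT") for _ in range(200)]
best = max(designs, key=lambda d: terrain_score(d, terrain="gapped"))
print(best)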
Two Minute Papers
https://www.youtube.com/watch?v=vx7H7GrE5KA
Can An AI Heal This Image?👩‍⚕️
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/wandb/getting-started/reports/Debug-Compare-Reproduce-Machine-Learning-Models--VmlldzoyNzY5MDk?utm_source=karoly 📝 The paper "Self-Organising Textures" is available here: https://distill.pub/selforg/2021/textures/ Game of Life animation source: https://copy.sh/life/ Game of Life image source: https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to play with a cellular automaton and fuse it together with a neural network. It will be quite an experience. But of course, first, what are these things anyway? You can imagine a cellular automaton as a small game where we have a bunch of cells and a set of simple rules that describe when a cell should be full and when it should be empty. What you see here is a popular example called Game of Life, which simulates a tiny world where each cell represents a life form. So, why is this so interesting? Well, this cellular automaton shows us that even a small set of simple rules can give rise to remarkably complex life forms, such as gliders, spaceships, and even John von Neumann's universal constructor, or in other words, self-replicating machines. Now, it gets more interesting: a later paper fused the cellular automaton with a neural network. It was tasked to grow, and even better, maintain a prescribed shape. Remember these two words: grow and maintain shape. And the question was: if it can recover from undesirable states, can it perhaps regenerate when damaged? Well, here you will see all kinds of damage, and then this happens. Nice! The best part is that this thing wasn't even trained to be able to perform this kind of regeneration. The objective during training was that it should be able to perform its task of growing and maintaining shape, and it turns out some sort of regeneration is included in that. This sounds very promising, and I wonder if we can apply this concept to something where healing is instrumental. Are there such applications in computer science? If so, what could those be? Oh yes, yes there are. For instance, think about texture synthesis. This is a topic that is subject to a great deal of research in computer graphics, and those folks have this down to a science. So, what are we doing here? Texture synthesis typically means that we need lots of concrete, gravel road, skin, or marble, or unique stripes for zebras, for instance for a computer game or the post-production of a movie, and we really don't want to draw miles and miles of these textures by hand. Instead, we give a small sample to an algorithm to continue, where the output should be a bigger version of this pattern with the same characteristics. So, how do we know if we have a good technique at hand? Well, first, it must not be repetitive. Checkmark. And it has to be free of seams. This part means that we should not be able to see any lines or artifacts that would quickly give the trick away. Now, get this: this new paper attempts to do the same with a neural cellular automaton. What an insane idea! We like those around here, so let's give it a try. How? Well, first by trying to expand this simple checkerboard pattern. The algorithm starts out from random noise and, as it evolves, well, this is a disaster. We are looking for squares, but we have quadrilaterals. They are also misaligned, and they are also inconsistent. But luckily, we are not done yet, and now hold on to your papers and observe how the grid cells communicate with each other to improve the result. First, the misalignment is taken care of, then the quadrilaterals become squares, and then the consistency of displacement is improved. And the end result is... look. In this other example, we can not only see these beautiful bubbles grow out of nowhere, but the density of the bubbles remains roughly the same over the process.
Look, as two of them get too close to each other, they coalesce or pop. Damaged bubbles can also regrow. Very cool. Okay, it can do proper texture synthesis, but so can a ton of other handcrafted computer graphics algorithms. So, why is this interesting? Why bother with this? Well, first, you may think that the result of this technique is the same as that of other techniques, but it isn't. The output is not necessarily an image, but can be an animation too. Excellent. Here, it was also able to animate the checkerboard pattern, and even better, it can not only reproduce the weave pattern, but the animation part extends to this too. And now comes the even more interesting part. Let's ask: why does this output an animation and not an image? The answer lies within these weaving patterns. We just need to carefully observe them. Let's see. Yes, again, we start out from noise, where some woven patterns emerge, but then it almost looks like a person started weaving them, until it resembles the initial sample. Yes, that is the key. The neural network learned to create not an image, not an animation, but no less than a computer program to accomplish this kind of texture synthesis. How cool is that? So, armed with all that knowledge, do you remember the regenerating iguana project? Let's try to destroy these textures too, and see if it can use these computer programs to recover and get us a seamless texture. First, we delete parts of the texture, then it fills in the gap with noise, and now let's run that program. Wow! Resilient, self-healing texture synthesis. How cool is that? And in every case, it starts out from a solution that is completely wrong, improves it to be just kind of wrong, and after further improvement, there you go. Fantastic! What a time to be alive! And note that this is a paper in the wonderful Distill journal, which not only means that it is excellent, but also interactive. So, you can run many of these experiments yourself right in your web browser. The link is available in the video description. This episode has been supported by Weights & Biases. In this post, they show you how to debug and compare models by tracking predictions, hyperparameters, GPU usage, and more. If you work with learning algorithms on a regular basis, make sure to check out Weights & Biases. Their system is designed to help you organize your experiments, and it is so good it could shave off weeks or even months of work from your projects, and it is completely free for all individuals, academics, and open source projects. This really is as good as it gets, and it is hardly a surprise that they are now used by over 200 companies and research institutions. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their longstanding support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.76, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Karajone Fahir."}, {"start": 4.76, "end": 11.56, "text": " Today, we are going to play with a cellular automaton and fuse it together with the neural network."}, {"start": 11.56, "end": 13.88, "text": " It will be quite an experience."}, {"start": 13.88, "end": 17.88, "text": " But of course, first, what are these things anyway?"}, {"start": 17.88, "end": 23.68, "text": " You can imagine a cellular automaton as a small game where we have a bunch of cells"}, {"start": 23.68, "end": 30.72, "text": " and a set of simple rules that describe when a cell should be full and when it should be empty."}, {"start": 30.72, "end": 34.36, "text": " What you see here is a popular example called Game of Life,"}, {"start": 34.36, "end": 39.44, "text": " which simulates a tiny world where each cell represents a live form."}, {"start": 39.44, "end": 42.16, "text": " So, why is this so interesting?"}, {"start": 42.16, "end": 47.96, "text": " Well, this cellular automaton shows us that even a small set of simple rules"}, {"start": 47.96, "end": 53.84, "text": " can give rise to remarkably complex live forms such as gliders, spaceships,"}, {"start": 53.84, "end": 61.08, "text": " and even John von Neumann's Universal Constructor or in other words, self-replicating machines."}, {"start": 61.08, "end": 69.0, "text": " Now, this gets more interesting, a later paper fused the cellular automaton with a neural network."}, {"start": 69.0, "end": 74.76, "text": " It was tasked to grow and even better maintain a prescribed shape."}, {"start": 74.76, "end": 79.08000000000001, "text": " Remember these two words, grow and maintain shape."}, {"start": 79.08000000000001, "end": 82.96000000000001, "text": " And the question was if it can recover from undesirable states,"}, {"start": 82.96000000000001, "end": 86.56, "text": " can it perhaps regenerate when damaged?"}, {"start": 86.56, "end": 92.48, "text": " Well, here you will see all kinds of damage and then this happens."}, {"start": 92.48, "end": 93.72, "text": " Nice!"}, {"start": 93.72, "end": 100.56, "text": " The best part is that this thing wasn't even trained to be able to perform this kind of regeneration."}, {"start": 100.56, "end": 105.52, "text": " The objective for training was that it should be able to perform its task of growing"}, {"start": 105.52, "end": 112.56, "text": " and maintaining shape and it turns out some sort of regeneration is included in that."}, {"start": 112.56, "end": 118.24000000000001, "text": " This sounds very promising and I wonder if we can apply this concept to something"}, {"start": 118.24000000000001, "end": 120.76, "text": " where healing is instrumental."}, {"start": 120.76, "end": 123.68, "text": " Are there such applications in computer science?"}, {"start": 123.68, "end": 127.0, "text": " If so, what could those be?"}, {"start": 127.0, "end": 129.16, "text": " Oh yes, yes there are."}, {"start": 129.16, "end": 132.56, "text": " For instance, think about texture synthesis."}, {"start": 132.56, "end": 136.96, "text": " This is a topic that is subject to a great deal of research in computer graphics"}, {"start": 136.96, "end": 140.35999999999999, "text": " and those folks have this down to a science."}, {"start": 140.35999999999999, "end": 142.56, "text": " So, what are we doing here?"}, {"start": 142.56, "end": 149.56, "text": " Texture synthesis typically means that we need lots of concrete or gravel road, skin, marble,"}, {"start": 
149.56, "end": 153.96, "text": " create unique stripes for zebras, for instance for a computer game,"}, {"start": 153.96, "end": 161.56, "text": " or the post-production of a movie and we really don't want to draw miles and miles of these textures by hand."}, {"start": 161.56, "end": 167.16, "text": " Instead, we give it to an algorithm to continue this small sample where the output should be"}, {"start": 167.16, "end": 171.96, "text": " a bigger version of this pattern with the same characteristics."}, {"start": 171.96, "end": 175.76000000000002, "text": " So, how do we know if we have a good technique at hand?"}, {"start": 175.76000000000002, "end": 182.56, "text": " Well, first it must not be repetitive, checkmark, and it has to be free of seams."}, {"start": 182.56, "end": 189.56, "text": " This part means that we should not be able to see any lines or artifacts that would quickly give the trick away."}, {"start": 189.56, "end": 197.36, "text": " Now, get this, this new paper attempts to do the same with neural cellular automaton."}, {"start": 197.36, "end": 199.46, "text": " What an insane idea!"}, {"start": 199.46, "end": 203.36, "text": " We like those around here, so let's give it a try."}, {"start": 203.36, "end": 204.36, "text": " How?"}, {"start": 204.36, "end": 209.36, "text": " Well, first by trying to expand this simple checkerboard pattern."}, {"start": 209.36, "end": 214.16000000000003, "text": " The algorithm is starting out from random noise and as it evolves,"}, {"start": 214.16000000000003, "end": 216.76000000000002, "text": " well, this is a disaster."}, {"start": 216.76000000000002, "end": 220.66000000000003, "text": " We are looking for squares, but we have quadrilaterals."}, {"start": 220.66000000000003, "end": 225.96, "text": " They are also misaligned, and they are also inconsistent."}, {"start": 225.96, "end": 236.16000000000003, "text": " But luckily, we are not done yet, and now hold on to your papers and observe how the grid cells communicate with each other to improve the result."}, {"start": 236.16, "end": 242.16, "text": " First, the misalignment is taken care of, then the quadrilaterals become squares,"}, {"start": 242.16, "end": 246.16, "text": " and then the consistency of displacement is improved."}, {"start": 246.16, "end": 249.56, "text": " And the end result is, look."}, {"start": 249.56, "end": 255.46, "text": " In this other example, we can not only see these beautiful bubbles grow out of nowhere,"}, {"start": 255.46, "end": 261.36, "text": " but the density of the bubbles remains roughly the same over the process."}, {"start": 261.36, "end": 267.36, "text": " Look, as two of them get too close to each other, they coalesce or pop."}, {"start": 267.36, "end": 270.36, "text": " Damage bubbles can also regrow."}, {"start": 270.36, "end": 272.16, "text": " Very cool."}, {"start": 272.16, "end": 279.56, "text": " Okay, it can do proper texture synthesis, but so can a ton of other handcrafted computer graphics algorithms."}, {"start": 279.56, "end": 282.16, "text": " So, why is this interesting?"}, {"start": 282.16, "end": 283.86, "text": " Why bother with this?"}, {"start": 283.86, "end": 289.86, "text": " Well, first, you may think that the result of this technique is the same as other techniques,"}, {"start": 289.86, "end": 291.56, "text": " but it isn't."}, {"start": 291.56, "end": 297.36, "text": " The output is not necessarily an image, but can be an animation too."}, {"start": 297.36, "end": 298.36, "text": " Excellent."}, {"start": 298.36, "end": 
304.16, "text": " Here, it was also able to animate the checkerboard pattern, and even better,"}, {"start": 304.16, "end": 310.56, "text": " it can not only reproduce the weave pattern, but the animation part extends to this too."}, {"start": 310.56, "end": 313.46000000000004, "text": " And now comes the even more interesting part."}, {"start": 313.46000000000004, "end": 318.56, "text": " Let's ask why does this output an animation and not an image?"}, {"start": 318.56, "end": 321.56, "text": " The answer lies within these weaving patterns."}, {"start": 321.56, "end": 324.76, "text": " We just need to carefully observe them."}, {"start": 324.76, "end": 325.96, "text": " Let's see."}, {"start": 325.96, "end": 331.26, "text": " Yes, again, we start out from the noise, where some woven patterns emerge,"}, {"start": 331.26, "end": 335.46, "text": " but then it almost looks like a person who started weaving them"}, {"start": 335.46, "end": 338.26, "text": " until it resembles the initial sample."}, {"start": 338.26, "end": 340.46, "text": " Yes, that is the key."}, {"start": 340.46, "end": 345.16, "text": " The neural network learned to create not an image, not an animation,"}, {"start": 345.16, "end": 350.96000000000004, "text": " but no less than a computer program to accomplish this kind of texture synthesis."}, {"start": 350.96000000000004, "end": 353.16, "text": " How cool is that?"}, {"start": 353.16, "end": 359.56, "text": " So, aren't with all that knowledge, do you remember the regenerating iguana project?"}, {"start": 359.56, "end": 366.36, "text": " Let's try to destroy these textures too, and see if it can use these computer programs to recover"}, {"start": 366.36, "end": 369.56, "text": " and get us a seamless texture."}, {"start": 369.56, "end": 372.26000000000005, "text": " First, we delete parts of the texture,"}, {"start": 372.26, "end": 380.96, "text": " then it fills in the gap with noise, and now let's run that program."}, {"start": 380.96, "end": 381.96, "text": " Wow!"}, {"start": 381.96, "end": 385.56, "text": " Resilient, self-healing texture synthesis."}, {"start": 385.56, "end": 387.56, "text": " How cool is that?"}, {"start": 387.56, "end": 392.76, "text": " And in every case, it starts out from a solution that is completely wrong,"}, {"start": 392.76, "end": 398.56, "text": " improves it to be just kind of wrong, and after further improvement,"}, {"start": 398.56, "end": 399.65999999999997, "text": " there you go."}, {"start": 399.65999999999997, "end": 401.06, "text": " Fantastic!"}, {"start": 401.06, "end": 403.26, "text": " What a time to be alive!"}, {"start": 403.26, "end": 407.76, "text": " And note that this is a paper in the wonderful distale journal,"}, {"start": 407.76, "end": 412.96, "text": " which not only means that it is excellent, but also interactive."}, {"start": 412.96, "end": 417.66, "text": " So, you can run many of these experiments yourself right in your web browser."}, {"start": 417.66, "end": 420.46, "text": " The link is available in the video description."}, {"start": 420.46, "end": 423.66, "text": " This episode has been supported by weights and biases."}, {"start": 423.66, "end": 427.26, "text": " In this post, they show you how to debug and compare models"}, {"start": 427.26, "end": 432.26, "text": " by tracking predictions, hyperparameters, GPU usage, and more."}, {"start": 432.26, "end": 435.36, "text": " If you work with learning algorithms on a regular basis,"}, {"start": 435.36, "end": 438.06, "text": " make sure to check out 
weights and biases."}, {"start": 438.06, "end": 441.36, "text": " Their system is designed to help you organize your experiments,"}, {"start": 441.36, "end": 446.65999999999997, "text": " and it is so good it could shave off weeks or even months of work from your projects,"}, {"start": 446.65999999999997, "end": 452.26, "text": " and is completely free for all individuals, academics, and open source projects."}, {"start": 452.26, "end": 454.65999999999997, "text": " This really is as good as it gets,"}, {"start": 454.66, "end": 460.76000000000005, "text": " and it is hardly a surprise that they are now used by over 200 companies and research institutions."}, {"start": 460.76000000000005, "end": 464.86, "text": " Make sure to visit them through wnb.com slash papers,"}, {"start": 464.86, "end": 469.46000000000004, "text": " or just click the link in the video description, and you can get a free demo today."}, {"start": 469.46000000000004, "end": 472.76000000000005, "text": " Our thanks to weights and biases for their longstanding support,"}, {"start": 472.76000000000005, "end": 475.76000000000005, "text": " and for helping us make better videos for you."}, {"start": 475.76, "end": 485.76, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=22Sojtv4gbg
Intel's Video Game Looks Like Reality! 🌴
❤️ Check out Perceptilabs and sign up for a free demo here: https://www.perceptilabs.com/papers 📝 The paper "Enhancing Photorealism Enhancement" is available here: https://isl-org.github.io/PhotorealismEnhancement/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #vr #metaverse #intel
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. This paper is called Enhancing Photorealism Enhancement. Hmm, let's try to unpack what that exactly means. It means that we take video footage from a game, for instance GTA V, which is an action game where the city we can play in was modeled after real places in California. Now, as we are living in the advent of neural network-based learning algorithms, we have a ton of training data at our disposal on the internet. For instance, the Cityscapes dataset contains images and videos taken in 50 real cities, and it also contains annotations that describe which object is which. And the authors of this paper looked at this and had an absolutely insane idea. The idea is: let's learn from the Cityscapes dataset what cars, cities, and architecture look like, then take a piece of video footage from the game and translate it into a real movie. So, basically, something that is impossible. That is an insane idea, and when I read this paper, I thought that it cannot possibly work in any case, but especially not given that the game takes place in California, and the Cityscapes dataset contains mostly footage of German cities. How would a learning algorithm pull that off? There is no way this will work. Now, there are previous techniques that attempted this. Here you see a few of them. And, well, the realism is just not there, and there was an even bigger issue. And that is the lack of temporal coherence. This is the flickering that you see, where the AI processes these images independently and does not do so consistently. This quickly breaks the immersion and is typically a deal breaker. And now, hold on to your papers and let's have a look together at the new technique. Whoa! This is nothing like the previous ones. It renders the exact same place, the exact same cars, and the badges are still correct and still refer to real-world brands. And that's not even the best part. Look! The car paint materials are significantly more realistic, something that is really difficult to capture in a real-time rendering engine. Lots of realistic-looking specular highlights off of something that feels like the real geometry of the car. Wow! Now, as you see, most of the generated photorealistic images are dimmer and less saturated than the video game graphics. Why is that? This is because computer game engines often create a more stylized world, where the saturation, highlights, and bloom effects are often more pronounced. Let's try to fight this bias, where many people consider the more saturated images to be better, and focus our attention on the realism in these image pairs. While we are there, for reference, we can have a look at what the output would be if we didn't do any of the photorealistic magic, but instead just tried to breathe more life into the video game footage by transferring the color schemes from the real-world videos in the training set. So, only color transfer. Let's see. Yes, that helps, until we compare the results with the photorealistic images synthesized by this new AI. Look! The trees don't look nearly as realistic as with the new method, and after we see the real roads, it's hard to settle for the synthetic ones from the game. However, no one said that Cityscapes is the only dataset we can use for this method. In fact, if we still find ourselves yearning for that saturated look, we can try to plug in a more stylized dataset and get this.
This is fantastic, because these images don't have many of the limitations of computer graphics rendering systems. Why is that? Because look at the grass here. In the game, it looks like a 2D texture, to save resources and be able to render an image quicker. However, the new system can put more real-looking grass in there, which is a fully 3D object where every single blade of grass is considered. The most mind-blowing thing here is that this AI finally has enough generalization capability to learn about cities in Germany and still be able to make convincing photorealistic images of California. The algorithm never saw California, and yet it can recreate it from video game footage better than I ever imagined would be possible. That is mind-blowing. Unreal. And if you have been holding onto your paper so far, now squeeze that paper, because here we have one of those rare cases where we squeeze our papers not for a feature, but for a limitation of sorts. You see, there are limits to this technique too. For instance, since the AI was trained on the beautiful lush hills of Germany and Austria, it hasn't really seen the dry hills of LA. So, what does it do with them? Look, it redrew the hills the only way it has seen hills exist, which is with trees. Now, we can think of this as a limitation, but also as an opportunity. Just imagine the amazing artistic effects we could achieve by playing this trick to our advantage. Also, we don't need to create an 80% photorealistic game like this one and push it up to 100% with the AI. We could draw not 80% but the bare minimum, maybe only 20% of the video game, a coarse draft, if you will, and let the AI do the heavy lifting. Imagine how much modeling time we could save for artists as well. I love this. What a time to be alive! Now, all of this only makes sense for real-world use if it can run quickly. So, can it? How long do we have to wait to get such a photorealistic video? Do we have to wait for minutes to hours? No, the whole thing runs interactively, which means that it is already usable; we can plug this into the game as a post-processing step. And remember the First Law of Papers, which says that two more papers down the line, and it will be even better. What improvements do you expect to happen soon? And what would you use this for? Let me know in the comments below. PerceptiLabs is a visual API for TensorFlow, carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables and gives you recommendations both during modeling and training, and it does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers to easily install the free local version of their system today. Our thanks to PerceptiLabs for their support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.88, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.88, "end": 9.6, "text": " This paper is called Enhancing Photorealism Enhancement."}, {"start": 9.6, "end": 14.0, "text": " Hmm, let's try to unpack what that exactly means."}, {"start": 14.0, "end": 19.6, "text": " This means that we take video footage from a game, for instance GTA V,"}, {"start": 19.6, "end": 26.400000000000002, "text": " which is an action game where the city we can play in was modeled after real places in California."}, {"start": 26.4, "end": 31.119999999999997, "text": " Now, as we're living the advent of neural network-based learning algorithms,"}, {"start": 31.119999999999997, "end": 35.44, "text": " we have a ton of training data at our disposal on the internet."}, {"start": 35.44, "end": 42.08, "text": " For instance, the city's capes dataset contains images and videos taken in 50 real cities,"}, {"start": 42.08, "end": 46.879999999999995, "text": " and it also contains annotations that describe which object is which."}, {"start": 46.879999999999995, "end": 52.8, "text": " And the authors of this paper looked at this and had an absolutely insane idea."}, {"start": 52.8, "end": 58.8, "text": " And the idea is, let's learn on the city's capes dataset what cars, cities,"}, {"start": 58.8, "end": 64.39999999999999, "text": " and architecture looks like, then take a piece of video footage from the game"}, {"start": 64.39999999999999, "end": 67.6, "text": " and translate it into a real movie."}, {"start": 67.6, "end": 71.36, "text": " So, basically something that is impossible."}, {"start": 71.36, "end": 78.8, "text": " That is an insane idea, and when I read this paper, I thought that cannot possibly work in any case,"}, {"start": 78.8, "end": 83.67999999999999, "text": " but especially not given that the game takes place in California,"}, {"start": 83.67999999999999, "end": 88.0, "text": " and the city's capes dataset contains mostly footage of German cities."}, {"start": 88.0, "end": 91.03999999999999, "text": " How would a learning algorithm pull that off?"}, {"start": 91.03999999999999, "end": 92.72, "text": " There is no way this will work."}, {"start": 93.75999999999999, "end": 96.47999999999999, "text": " Now, there are previous techniques that attempted this."}, {"start": 97.03999999999999, "end": 98.56, "text": " Here you see a few of them."}, {"start": 99.67999999999999, "end": 105.36, "text": " And, well, the realism is just not there, and there was an even bigger issue."}, {"start": 105.36, "end": 108.64, "text": " And that is the lack of temporal coherence."}, {"start": 108.64, "end": 114.24, "text": " This is the flickering that you see where the AI processes these images independently,"}, {"start": 114.24, "end": 116.8, "text": " and does not do that consistently."}, {"start": 116.8, "end": 121.28, "text": " This quickly breaks the immersion and is typically a deal breaker."}, {"start": 121.28, "end": 126.64, "text": " And now, hold on to your papers and let's have a look together at the new technique."}, {"start": 126.64, "end": 128.56, "text": " Whoa!"}, {"start": 128.56, "end": 131.92000000000002, "text": " This is nothing like the previous ones."}, {"start": 131.92, "end": 136.72, "text": " It renders the exact same place, the exact same cars,"}, {"start": 136.72, "end": 140.79999999999998, "text": " and the badges are still correct, and still refer to real-world brands."}, {"start": 141.76, "end": 144.16, "text": " And 
that's not even the best part."}, {"start": 144.16, "end": 144.39999999999998, "text": " Look!"}, {"start": 145.2, "end": 148.72, "text": " The car-peat materials are significantly more realistic,"}, {"start": 149.27999999999997, "end": 153.51999999999998, "text": " something that is really difficult to capture in a real-time rendering engine."}, {"start": 154.23999999999998, "end": 160.56, "text": " Lots of realistic-looking, specular highlights off of something that feels like the real geometry"}, {"start": 160.56, "end": 162.56, "text": " of the car. Wow!"}, {"start": 163.44, "end": 169.76, "text": " Now, as you see, most of the generated photorealistic images are dimmer and less saturated"}, {"start": 169.76, "end": 171.12, "text": " than the video game graphics."}, {"start": 171.84, "end": 172.72, "text": " Why is that?"}, {"start": 173.52, "end": 178.4, "text": " This is because computer game engines often create a more stylized world,"}, {"start": 178.4, "end": 183.44, "text": " where the saturation, highs, and bloom effects are often more pronounced."}, {"start": 183.44, "end": 190.0, "text": " Let's try to fight these bias, where many people consider the more saturated images to be better,"}, {"start": 190.0, "end": 194.64, "text": " and focus our attention to the realism in these image pairs."}, {"start": 194.64, "end": 200.07999999999998, "text": " While we are there, for reference, we can have a look at what the output would be"}, {"start": 200.07999999999998, "end": 203.52, "text": " if we didn't do any of the photorealistic magic,"}, {"start": 203.52, "end": 208.72, "text": " but instead we just tried to breathe more life into the video game footage"}, {"start": 208.72, "end": 214.32, "text": " by trying to transfer the color schemes from these real-world videos in the training set."}, {"start": 215.44, "end": 217.28, "text": " So, only color transfer."}, {"start": 218.07999999999998, "end": 225.92, "text": " Let's see, yes, that helps, until we compare the results with the photorealistic images synthesized"}, {"start": 225.92, "end": 232.4, "text": " by this new AI. Look! The trees don't look nearly as realistic as the new method,"}, {"start": 232.4, "end": 238.08, "text": " and after we see the real roads, it's hard to settle for the synthetic ones from the game."}, {"start": 238.8, "end": 244.0, "text": " However, no one said that CTscape is the only dataset we can use for this method."}, {"start": 244.56, "end": 249.36, "text": " In fact, if we still find ourselves yearning for that saturated look,"}, {"start": 249.36, "end": 254.0, "text": " we can try to plug in a more stylized dataset and get this."}, {"start": 255.12, "end": 261.04, "text": " This is fantastic, because these images don't have many of the limitations of computer graphics"}, {"start": 261.04, "end": 267.52000000000004, "text": " rendering systems. Why is that? Because look at the grass here. 
In the game,"}, {"start": 267.52000000000004, "end": 273.12, "text": " it looks like a 2D texture to save resources and be able to render an image quicker."}, {"start": 273.68, "end": 277.92, "text": " However, the new system can put more real-looking grass in there,"}, {"start": 277.92, "end": 282.96000000000004, "text": " which is a fully 3D object where every single blade of grass is considered."}, {"start": 283.76, "end": 290.0, "text": " The most mind-blowing thing here is that this AI has finally enough generalization capabilities"}, {"start": 290.0, "end": 297.2, "text": " to learn about cities in Germany and still be able to make convincing photorealistic images for"}, {"start": 297.2, "end": 304.4, "text": " California. The algorithm never saw California, and yet it can recreate it from video game footage"}, {"start": 304.4, "end": 310.32, "text": " better than I ever imagined would be possible. That is mind-blowing. Unreal."}, {"start": 311.2, "end": 316.32, "text": " And if you have been holding onto your paper so far, now squeeze that paper,"}, {"start": 316.32, "end": 322.24, "text": " because here we have one of those rare cases where we squeeze our papers for not a feature,"}, {"start": 322.24, "end": 327.28, "text": " but for a limitation of sorts. You see, there are limits to this technique too."}, {"start": 327.76, "end": 334.08, "text": " For instance, since the AI was trained on the beautiful lush hills of Germany and Austria,"}, {"start": 334.08, "end": 339.52, "text": " it hasn't really seen the dry hills of LA. So, what does it do with them?"}, {"start": 339.52, "end": 347.84, "text": " Look, it redrew the hills the only way it saw hills exist, which is with trees."}, {"start": 348.79999999999995, "end": 354.15999999999997, "text": " Now we can think of this as a limitation, but also as an opportunity."}, {"start": 354.88, "end": 360.56, "text": " Just imagine the amazing artistic effects we could achieve by playing this trick to our advantage."}, {"start": 361.2, "end": 367.68, "text": " Also, we don't need to create an 80% photorealistic game like this one and push it up to 100%"}, {"start": 367.68, "end": 376.48, "text": " with the AI. We could draw not 80% but the bare minimum, maybe only 20% for the video game,"}, {"start": 376.48, "end": 383.6, "text": " of course, draft, if you will, and let the AI do the heavy lifting. Imagine how much modeling time"}, {"start": 383.6, "end": 391.84000000000003, "text": " we could save for artists as well. I love this. What a time to be alive! Now all of this only"}, {"start": 391.84, "end": 399.91999999999996, "text": " makes sense for real world use if it can run quickly. So, can it? How long do we have to wait to get"}, {"start": 399.91999999999996, "end": 407.44, "text": " such a photorealistic video? Do we have to wait for minutes to hours? No, the whole thing runs"}, {"start": 407.44, "end": 413.35999999999996, "text": " interactively, which means that it is already usable, we can plug this into the game as a post"}, {"start": 413.35999999999996, "end": 419.2, "text": " processing step. And remember the first law of papers, which says that two more papers down the"}, {"start": 419.2, "end": 426.64, "text": " line and it will be even better. What improvements do you expect to happen soon? And what would you"}, {"start": 426.64, "end": 432.71999999999997, "text": " use this for? Let me know in the comments below. 
PerceptiLabs is a visual API for TensorFlow"}, {"start": 432.71999999999997, "end": 438.8, "text": " carefully designed to make machine learning as intuitive as possible. This gives you a faster way"}, {"start": 438.8, "end": 444.88, "text": " to build out models with more transparency into how your model is architected, how it performs,"}, {"start": 444.88, "end": 451.04, "text": " and how to debug it. Look, it lets you toggle between the visual modeler and the code editor."}, {"start": 451.04, "end": 456.4, "text": " It even generates visualizations for all the model variables and gives you recommendations"}, {"start": 456.4, "end": 463.28, "text": " both during modeling and training and does all this automatically. I only wish I had a tool like this"}, {"start": 463.28, "end": 469.76, "text": " when I was working on my neural networks during my PhD years. Visit perceptiLabs.com slash papers"}, {"start": 469.76, "end": 475.2, "text": " to easily install the free local version of their system today. Our thanks to perceptiLabs for"}, {"start": 475.2, "end": 480.15999999999997, "text": " their support and for helping us make better videos for you. Thanks for watching and for your"}, {"start": 480.16, "end": 508.32000000000005, "text": " generous support and I'll see you next time."}]
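The "only color transfer" baseline mentioned in the transcript above boils down to matching color statistics between a game frame and real footage. Here is a minimal sketch of the classic per-channel mean/std transfer, done directly in RGB for brevity; the paper's actual baseline and its full photorealism-enhancement network are considerably more involved, and the frames below are random placeholders.

# Per-channel mean/std color transfer: shift and scale each color channel of
# the game frame to match the statistics of a real-world reference frame.
import numpy as np

def color_transfer(game_frame, real_frame):
    """Match game_frame's per-channel mean and std to real_frame's (RGB, [0, 1])."""
    out = np.empty_like(game_frame, dtype=np.float32)
    for c in range(3):
        g = game_frame[..., c].astype(np.float32)
        r = real_frame[..., c].astype(np.float32)
        out[..., c] = (g - g.mean()) / (g.std() + 1e-6) * r.std() + r.mean()
    return np.clip(out, 0.0, 1.0)

# usage with placeholder frames
game = np.random.rand(720, 1280, 3).astype(np.float32)
real = np.random.rand(720, 1280, 3).astype(np.float32)
recolored = color_transfer(game, real)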
Two Minute Papers
https://www.youtube.com/watch?v=g7bEUB8aLvM
Can We Teach Physics To A DeepMind's AI? ⚛
❤️ Check out Fully Connected by Weights & Biases: https://wandb.me/papers 📝 The paper "Learning mesh-based simulation with Graph Networks" is available here: https://arxiv.org/abs/2010.03409 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Karajona Ifeher. If you have been watching this series for a while, you know very well that I love learning algorithms and fluid simulations. But do you know what I like even better? Learning algorithms applied to fluid simulations, so I couldn't be happier with today's paper. We can create wondrous fluid simulations like the ones you see here by studying the laws of fluid motion from physics and writing a computer program that contains these laws. However, I just mentioned learning algorithms. How do these even come to the picture? If we can write a program that simulates the laws, why would we need learning-based algorithms? This doesn't seem to make any sense. You see, in this task, neural networks are applied to solve something that we already know how to solve. However, if we use a neural network to perform this task, we have to train it, which is a long and arduous process. I hope to have convinced you that this is a bad, bad idea. Why would anyone bother to do that? Does this make any sense? Well, it does make a lot of sense. And the reason for that is that this training step only has to be done once, and afterwards, carrying the neural network, that is, predicting what happens next in the simulation runs almost immediately. This takes way less time than calculating all the forces and pressures in the simulation while retaining high-quality results. This earlier work from last year absolutely nailed this problem. Look, this is a scene with the boxes it has been trained on. And now, let's ask it to try to simulate the evolution of significantly different shapes. Wow! It not only does well with these previously unseen shapes, but it also handles their interactions really well. But, there was more. We could also train it on a tiny domain with only a few particles, and then it was able to learn general concepts that we can reuse to simulate a much bigger domain, and also with more particles. Fantastic! This was a simple general model that truly is a force to be reckoned with. Now, this is a great leap in neural network-based physics simulations, but of course, not everything was perfect there. For instance, over longer timeframes, solids became incorrectly deformed. And now, a newer iteration of a similar system just came out from DeepMind's research lab that promises to extend these neural networks for an incredible set of use cases. Aerodynamics, structural mechanics, class simulations, and more. I am very excited to see how far they have come since. So, let's see how well it does first with rollouts, then with generalization experiments. Here is the first rollout experiment, so what does that mean, and what are we seeing here? On the left, you see a verified handcrafted algorithm performing the simulation, we will accept this as the true data, and on the right, the AI is trying to continue the initial simulation. But, there is one problem. And that problem is that the AI was only trained on short simulations with 400 timestabs that's only a few seconds. And unfortunately, this test will be 100 times longer. So, it only learned on short simulations can it manage to run a longer one and remain stable. Well, that will be tough, but so far so good. Still running. My goodness, this is really something. Still running, and it's very close to the ground truth. Okay, that is fantastic, but that was just a piece of cloth. What about interactions with other objects? 
Well, let's see, I'll stop the process here and there so we can inspect the differences. Again, flying colors, loving it. And apparently, the same can be said for structural mechanics and incompressible fluid dynamics simulations. Now, there is one more important lesson here: to be able to solve such a wide variety of simulation problems, we would normally need a bunch of different handcrafted algorithms that took many, many years to develop. But this one neural network can learn and perform them all, and it can do it 10 to 100 times quicker. And now comes the second half, the generalization experiments. This means a simulation scenario with shapes that the algorithm has never seen before. And let's see if it obtained general knowledge of the underlying laws of physics to be able to pull this off. Oh my, look at that. Even the tiny piece that is hanging off of the flag is simulated nearly perfectly. In this one, they gave it different wind speeds and directions that it hadn't seen before, and not only that, but we are varying these parameters in time, and it doesn't even break a sweat. And hold onto your papers, because here comes my favorite. It can even learn on a small-scale simulation with a simple rectangular flag. And now we throw at it a much more detailed cylindrical flag with tassels. Surely this will be way beyond what any learning algorithm can do today. And okay, come on. I am truly out of words. Look, so now this is official. We can ask an AI to perform something that we already know how to do, and it will not only be able to reproduce similar simulations, but we can even ask for things that are quite unreasonable, well outside of what it had seen, and it handles all of these with flying colors. And it does this much better than previous techniques were able to. And it can learn from multiple different algorithms at the same time. Wow! What a time to be alive! This video has been supported by Weights & Biases. Check out their recent offering, Fully Connected, a place where they bring machine learning practitioners together to share and discuss their ideas, learn from industry leaders, and even collaborate on projects together. You see, I get messages from you Fellow Scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. Fully Connected is a great way to learn about the fundamentals, how to reproduce experiments, get your papers accepted to a conference, and more. Make sure to visit them through wandb.me/papers, or just click the link in the video description. Our thanks to Weights & Biases for their longstanding support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
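To make the rollout idea in this transcript concrete, here is a minimal Python sketch of how a learned one-step simulator can be applied autoregressively: it was only ever trained to predict the next timestep from the current one, and a long rollout simply feeds each prediction back in as the next input. The `learned_one_step` function below is a hypothetical stand-in (a toy damped oscillator), not DeepMind's graph network, and the 400-step versus 100x-longer horizons simply mirror the numbers mentioned in the narration.

```python
import numpy as np

# Hypothetical stand-in for a trained one-step simulator. In the paper's
# setting this would be a graph network mapping the current mesh state to the
# state one timestep later; here a damped-spring update keeps the sketch
# self-contained and runnable.
def learned_one_step(state, dt=0.01):
    pos, vel = state
    acc = -4.0 * pos - 0.1 * vel          # toy dynamics standing in for the network
    return pos + dt * vel, vel + dt * acc

def rollout(initial_state, num_steps):
    """Autoregressive rollout: feed each prediction back in as the next input.
    Training only ever saw short sequences; at test time we keep iterating,
    which is where long-horizon stability matters."""
    states = [initial_state]
    for _ in range(num_steps):
        states.append(learned_one_step(states[-1]))
    return states

short = rollout((np.ones(3), np.zeros(3)), num_steps=400)        # training-length horizon
long = rollout((np.ones(3), np.zeros(3)), num_steps=400 * 100)   # 100x longer test rollout
print(len(short), len(long))
```

The stability question raised in the video is exactly about whether the small per-step errors of the real learned model stay bounded when they are fed back in tens of thousands of times.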
[{"start": 0.0, "end": 4.88, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karajona Ifeher."}, {"start": 4.88, "end": 9.36, "text": " If you have been watching this series for a while, you know very well that I love learning"}, {"start": 9.36, "end": 12.84, "text": " algorithms and fluid simulations."}, {"start": 12.84, "end": 16.080000000000002, "text": " But do you know what I like even better?"}, {"start": 16.080000000000002, "end": 22.8, "text": " Learning algorithms applied to fluid simulations, so I couldn't be happier with today's paper."}, {"start": 22.8, "end": 28.0, "text": " We can create wondrous fluid simulations like the ones you see here by studying the laws"}, {"start": 28.0, "end": 34.480000000000004, "text": " of fluid motion from physics and writing a computer program that contains these laws."}, {"start": 34.480000000000004, "end": 38.480000000000004, "text": " However, I just mentioned learning algorithms."}, {"start": 38.480000000000004, "end": 41.28, "text": " How do these even come to the picture?"}, {"start": 41.28, "end": 47.68, "text": " If we can write a program that simulates the laws, why would we need learning-based algorithms?"}, {"start": 47.68, "end": 50.24, "text": " This doesn't seem to make any sense."}, {"start": 50.24, "end": 55.6, "text": " You see, in this task, neural networks are applied to solve something that we already"}, {"start": 55.6, "end": 57.8, "text": " know how to solve."}, {"start": 57.8, "end": 63.4, "text": " However, if we use a neural network to perform this task, we have to train it, which is"}, {"start": 63.4, "end": 65.92, "text": " a long and arduous process."}, {"start": 65.92, "end": 70.24, "text": " I hope to have convinced you that this is a bad, bad idea."}, {"start": 70.24, "end": 73.12, "text": " Why would anyone bother to do that?"}, {"start": 73.12, "end": 75.03999999999999, "text": " Does this make any sense?"}, {"start": 75.03999999999999, "end": 78.0, "text": " Well, it does make a lot of sense."}, {"start": 78.0, "end": 84.24, "text": " And the reason for that is that this training step only has to be done once, and afterwards,"}, {"start": 84.24, "end": 90.67999999999999, "text": " carrying the neural network, that is, predicting what happens next in the simulation runs almost"}, {"start": 90.67999999999999, "end": 92.16, "text": " immediately."}, {"start": 92.16, "end": 97.8, "text": " This takes way less time than calculating all the forces and pressures in the simulation"}, {"start": 97.8, "end": 100.72, "text": " while retaining high-quality results."}, {"start": 100.72, "end": 105.08, "text": " This earlier work from last year absolutely nailed this problem."}, {"start": 105.08, "end": 109.72, "text": " Look, this is a scene with the boxes it has been trained on."}, {"start": 109.72, "end": 116.6, "text": " And now, let's ask it to try to simulate the evolution of significantly different shapes."}, {"start": 116.6, "end": 119.4, "text": " Wow!"}, {"start": 119.4, "end": 124.84, "text": " It not only does well with these previously unseen shapes, but it also handles their interactions"}, {"start": 124.84, "end": 126.0, "text": " really well."}, {"start": 126.0, "end": 128.32, "text": " But, there was more."}, {"start": 128.32, "end": 134.88, "text": " We could also train it on a tiny domain with only a few particles, and then it was able"}, {"start": 134.88, "end": 141.0, "text": " to learn general concepts that we can reuse to simulate a much bigger domain, and also"}, {"start": 141.0, 
"end": 143.44, "text": " with more particles."}, {"start": 143.44, "end": 144.96, "text": " Fantastic!"}, {"start": 144.96, "end": 150.51999999999998, "text": " This was a simple general model that truly is a force to be reckoned with."}, {"start": 150.51999999999998, "end": 156.51999999999998, "text": " Now, this is a great leap in neural network-based physics simulations, but of course, not everything"}, {"start": 156.51999999999998, "end": 158.16, "text": " was perfect there."}, {"start": 158.16, "end": 163.8, "text": " For instance, over longer timeframes, solids became incorrectly deformed."}, {"start": 163.8, "end": 169.76000000000002, "text": " And now, a newer iteration of a similar system just came out from DeepMind's research lab"}, {"start": 169.76000000000002, "end": 175.76000000000002, "text": " that promises to extend these neural networks for an incredible set of use cases."}, {"start": 175.76000000000002, "end": 182.92000000000002, "text": " Aerodynamics, structural mechanics, class simulations, and more."}, {"start": 182.92000000000002, "end": 186.8, "text": " I am very excited to see how far they have come since."}, {"start": 186.8, "end": 193.60000000000002, "text": " So, let's see how well it does first with rollouts, then with generalization experiments."}, {"start": 193.6, "end": 200.96, "text": " Here is the first rollout experiment, so what does that mean, and what are we seeing here?"}, {"start": 200.96, "end": 206.72, "text": " On the left, you see a verified handcrafted algorithm performing the simulation, we will"}, {"start": 206.72, "end": 214.2, "text": " accept this as the true data, and on the right, the AI is trying to continue the initial simulation."}, {"start": 214.2, "end": 217.04, "text": " But, there is one problem."}, {"start": 217.04, "end": 223.76, "text": " And that problem is that the AI was only trained on short simulations with 400 timestabs"}, {"start": 223.76, "end": 226.44, "text": " that's only a few seconds."}, {"start": 226.44, "end": 230.72, "text": " And unfortunately, this test will be 100 times longer."}, {"start": 230.72, "end": 237.68, "text": " So, it only learned on short simulations can it manage to run a longer one and remain"}, {"start": 237.68, "end": 238.68, "text": " stable."}, {"start": 238.68, "end": 244.28, "text": " Well, that will be tough, but so far so good."}, {"start": 244.28, "end": 246.04, "text": " Still running."}, {"start": 246.04, "end": 249.04, "text": " My goodness, this is really something."}, {"start": 249.04, "end": 252.79999999999998, "text": " Still running, and it's very close to the ground truth."}, {"start": 252.79999999999998, "end": 258.92, "text": " Okay, that is fantastic, but that was just a piece of cloth."}, {"start": 258.92, "end": 261.68, "text": " What about interactions with other objects?"}, {"start": 261.68, "end": 268.24, "text": " Well, let's see, I'll stop the process here and there so we can inspect the differences."}, {"start": 268.24, "end": 272.92, "text": " Again, flying colors, loving it."}, {"start": 272.92, "end": 279.56, "text": " And apparently, the same can be said for simulations, structural mechanics, and incompressible fluid"}, {"start": 279.56, "end": 280.88, "text": " dynamics."}, {"start": 280.88, "end": 286.32, "text": " Now that is one more important lesson here, to be able to solve such a wide variety of"}, {"start": 286.32, "end": 292.48, "text": " simulation problems, we need a bunch of different handcrafted algorithms that talk many, many"}, {"start": 292.48, 
"end": 293.92, "text": " years to develop."}, {"start": 293.92, "end": 301.56, "text": " But, this one neural network can learn and perform them all, and it can do it 10 to 100 times"}, {"start": 301.56, "end": 303.2, "text": " quicker."}, {"start": 303.2, "end": 307.44, "text": " And now comes the second half, generalization experiments."}, {"start": 307.44, "end": 313.28000000000003, "text": " This means a simulation scenario which shapes that the algorithm has never seen before."}, {"start": 313.28000000000003, "end": 318.72, "text": " And let's see if it obtained general knowledge of the underlying laws of physics to be able"}, {"start": 318.72, "end": 320.2, "text": " to pull this off."}, {"start": 320.2, "end": 324.16, "text": " Oh my, look at that."}, {"start": 324.16, "end": 330.36, "text": " Even the tiny piece that is hanging off of the flag is simulated nearly perfectly."}, {"start": 330.36, "end": 335.84000000000003, "text": " In this one, they gave it different wind speeds and directions that it hadn't seen before,"}, {"start": 335.84000000000003, "end": 341.0, "text": " and not only that, but we are varying these parameters in time, and it doesn't even break"}, {"start": 341.0, "end": 342.84000000000003, "text": " a sweat."}, {"start": 342.84000000000003, "end": 346.6, "text": " And hold onto your papers, because here comes my favorite."}, {"start": 346.6, "end": 353.56, "text": " It can even learn on a small scale simulation with a simple rectangular flag."}, {"start": 353.56, "end": 359.64, "text": " And now we throw at it a much more detailed cylindrical flag with tassels."}, {"start": 359.64, "end": 364.84, "text": " Surely this will be way beyond what any learning algorithm can do today."}, {"start": 364.84, "end": 368.28, "text": " And okay, come on."}, {"start": 368.28, "end": 371.03999999999996, "text": " I am truly out of words."}, {"start": 371.03999999999996, "end": 374.64, "text": " Look, so now this is official."}, {"start": 374.64, "end": 380.44, "text": " We can ask an AI to perform something that we already know how to do, and it will not"}, {"start": 380.44, "end": 386.88, "text": " only be able to reproduce similar simulations, but we can even ask things that were previously"}, {"start": 386.88, "end": 393.76, "text": " quite unreasonably outside of what it had seen, and it handles all these with flying colors."}, {"start": 393.76, "end": 398.44, "text": " And it does this much better than previous techniques were able to."}, {"start": 398.44, "end": 403.12, "text": " And it can learn from multiple different algorithms at the same time."}, {"start": 403.12, "end": 404.64, "text": " Wow!"}, {"start": 404.64, "end": 406.6, "text": " What a time to be alive!"}, {"start": 406.6, "end": 409.76, "text": " This video has been supported by weights and biases."}, {"start": 409.76, "end": 414.88, "text": " Check out the recent offering fully connected, a place where they bring machine learning"}, {"start": 414.88, "end": 421.68, "text": " practitioners together to share and discuss their ideas, learn from industry leaders, and"}, {"start": 421.68, "end": 424.44, "text": " even collaborate on projects together."}, {"start": 424.44, "end": 429.48, "text": " You see, I get messages from you fellow scholars telling me that you have been inspired by the"}, {"start": 429.48, "end": 433.84, "text": " series, but don't really know where to start."}, {"start": 433.84, "end": 435.36, "text": " And here it is."}, {"start": 435.36, "end": 441.08, "text": " Fully connected is a 
great way to learn about the fundamentals, how to reproduce experiments,"}, {"start": 441.08, "end": 444.84, "text": " get your papers accepted to a conference, and more."}, {"start": 444.84, "end": 451.03999999999996, "text": " Make sure to visit them through wnb.me slash papers, or just click the link in the video"}, {"start": 451.03999999999996, "end": 452.03999999999996, "text": " description."}, {"start": 452.03999999999996, "end": 457.23999999999995, "text": " Our thanks to weights and biases for their longstanding support, and for helping us make"}, {"start": 457.23999999999995, "end": 458.71999999999997, "text": " better videos for you."}, {"start": 458.72, "end": 487.24, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=LtyvS7NYonw
Beautiful Fluid Simulations...In Just 40 Seconds! 🤯
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/wandb/getting-started/reports/Visualize-Debug-Machine-Learning-Models--VmlldzoyNzY5MDk?utm_source=karoly#System-4 📝 The paper "Wave Curves: Simulating Lagrangian water waves on dynamically deforming surfaces" is available here: http://visualcomputing.ist.ac.at/publications/2020/WaveCurves/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Through the power of computer graphics research works, today it is possible to simulate honey coiling, water flow with debris, or to even get a neural network to look at these simulations and learn how to continue them. Now, if we look under the hood, we see that not all, but many of these simulations contain particles. And our task is to simulate the pressure, velocity, and other physical quantities for these particles, and create a surface where we can watch the evolution of their movement. Once again, the simulations are typically based on particles. But not this new technique. Look, it takes a coarse simulation. Well, this one is not too exciting. So why are we looking at this? Well, look. Whoa! The new method can add these crisp, high-frequency details to it. And the result is an absolutely beautiful simulation. And it does not use millions and millions of particles to get this done. In fact, it does not use any particles at all. Instead, it uses wave curves. These are curve-shaped wave packets that can enrich a coarse simulation and improve it a great deal to create a really detailed, crisp output. And it gets even better, because these wave curves can be applied as a simple post-processing step. What this means is that the workflow that you saw here really works like that. When we have a coarse simulation that is already done and we are not happy with it, with many other existing methods, it is time to simulate the whole thing again from scratch, but not here. With this one, we can just add all this detail to an already existing simulation. Wow! I love it! Note that the surface of the fluid is made opaque so that we can get a better view of the waves. Of course, the final simulations that we get for production use are transparent, like the one you see here. Now, another interesting detail is that the execution time is linear with respect to the curve points. So, what does that mean? Well, let's have a look together. In the first scenario, we get a low-quality underlying simulation and we add a hundred thousand wave curves. This takes approximately ten seconds and looks like this. This already greatly enhanced the quality of the results, but we can decide to add more. So, first case, a hundred thousand wave curves in ten-ish seconds. Now comes the linear part. If we decide that we are yearning for a little more, we can run two hundred thousand wave curves, and the execution time will be twenty-ish seconds. It looks like this. Better, we are getting there. And for four hundred thousand wave curves, forty seconds, and for eight hundred thousand curves, yes, you guessed it right, eighty seconds. Double the number of curves, double the execution time. And this is what the linear scaling part means. Now, of course, not even this technique is perfect. The post-processing nature of the method means that it can enrich the underlying simulation a great deal, but it cannot add changes that are too intrusive to it. It can only add small waves relative to the size of the fluid domain. But even with these, the value proposition of this paper is just out of this world. So from now on, if we have a relatively poor-quality fluid simulation that we abandoned years ago, we don't need to despair; what we need is to harness the power of wave curves. This episode has been supported by Weights & Biases. In this post, they show you how to monitor and optimize your GPU consumption during model training, in real time, with one line of code.
During my PhD studies, I trained a ton of neural networks which were used in our experiments. However, over time, there was just too much data in our repositories, and what I am looking for is not data, but insight. And that's exactly how Weights & Biases helps you, by organizing your experiments. It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub, and more. And get this, Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com/papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
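As a rough illustration of the two ideas in this transcript, the post-processing nature of the method and its linear scaling, here is a small, self-contained Python sketch that sprinkles curve-shaped wave packets on top of an already finished coarse height field. It is only a toy standing in for the paper's method, and every parameter in it (packet size, amplitude, curve length) is made up for illustration; the point is that the work grows linearly with the number of curves, so doubling the curves roughly doubles the runtime.

```python
import numpy as np

def add_wave_curves(height, num_curves, points_per_curve=32, amp=0.02, seed=0):
    """Toy post-process in the spirit of wave curves: each 'curve' is a short
    random segment on the surface, and every point along it deposits a small
    Gaussian-windowed ripple onto the existing height field. The total work is
    proportional to num_curves * points_per_curve, hence the linear scaling."""
    rng = np.random.default_rng(seed)
    h, w = height.shape
    ys, xs = np.mgrid[0:h, 0:w]
    out = height.copy()
    for _ in range(num_curves):
        start = rng.uniform([0.0, 0.0], [h, w])      # where this wave curve begins
        direction = rng.normal(size=2)
        direction /= np.linalg.norm(direction)
        for t in np.linspace(0.0, 10.0, points_per_curve):
            cy, cx = start + t * direction
            r2 = (ys - cy) ** 2 + (xs - cx) ** 2
            out += amp * np.exp(-r2 / 4.0) * np.sin(2.0 * np.pi * t)
    return out

coarse = np.zeros((64, 64))                       # stand-in for the finished coarse simulation
detailed = add_wave_curves(coarse, num_curves=100)
more_detailed = add_wave_curves(coarse, num_curves=200)   # roughly twice the runtime
```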
[{"start": 0.0, "end": 4.96, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Karojejona Ifehid."}, {"start": 4.96, "end": 10.16, "text": " Through the power of computer graphics research works, today it is possible to simulate"}, {"start": 10.16, "end": 17.44, "text": " honey coiling, water flow with debris, or to even get a neural network to look at these"}, {"start": 17.44, "end": 21.72, "text": " simulations and learn how to continue them."}, {"start": 21.72, "end": 27.560000000000002, "text": " Now if we look under the hood, we see that not all, but many of these simulations contain"}, {"start": 27.560000000000002, "end": 28.92, "text": " particles."}, {"start": 28.92, "end": 33.96, "text": " And our task is to simulate the pressure, velocity, and other physical quantities for these"}, {"start": 33.96, "end": 39.92, "text": " particles and create a surface where we can watch the evolution of their movement."}, {"start": 39.92, "end": 44.120000000000005, "text": " Once again, the simulations are typically based on particles."}, {"start": 44.120000000000005, "end": 45.96, "text": " But not this new technique?"}, {"start": 45.96, "end": 48.96, "text": " Look, it takes a core simulation."}, {"start": 48.96, "end": 52.400000000000006, "text": " Well, this one is not too exciting."}, {"start": 52.400000000000006, "end": 55.0, "text": " So why are we looking at this?"}, {"start": 55.0, "end": 57.0, "text": " Well, look."}, {"start": 57.0, "end": 58.92, "text": " Whoa!"}, {"start": 58.92, "end": 63.52, "text": " The new method can add this crispy high frequency details to it."}, {"start": 63.52, "end": 67.0, "text": " And the result is an absolutely beautiful simulation."}, {"start": 67.0, "end": 72.08, "text": " And it does not use millions and millions of particles to get this done."}, {"start": 72.08, "end": 76.12, "text": " In fact, it does not use any particles at all."}, {"start": 76.12, "end": 79.16, "text": " Instead, it uses wave curves."}, {"start": 79.16, "end": 84.8, "text": " These are curved shaped wave packets that can enrich a core simulation and improve it"}, {"start": 84.8, "end": 89.24, "text": " a great deal to create a really detailed crisp output."}, {"start": 89.24, "end": 94.64, "text": " And it gets even better because these wave curves can be applied as a simple post-processing"}, {"start": 94.64, "end": 95.72, "text": " step."}, {"start": 95.72, "end": 100.67999999999999, "text": " What this means is that the workflow that you saw here really works like that."}, {"start": 100.67999999999999, "end": 106.03999999999999, "text": " When we have a core simulation that is already done and we are not happy with it, with many"}, {"start": 106.03999999999999, "end": 112.36, "text": " other existing methods, it is time to simulate the whole thing again from scratch, but not"}, {"start": 112.36, "end": 113.36, "text": " here."}, {"start": 113.36, "end": 118.08, "text": " This one we can just add all this detail to an already existing simulation."}, {"start": 118.08, "end": 119.08, "text": " Wow!"}, {"start": 119.08, "end": 120.84, "text": " I love it!"}, {"start": 120.84, "end": 125.84, "text": " Note that the surface of the fluid is made opaque so that we can get a better view of the"}, {"start": 125.84, "end": 126.84, "text": " waves."}, {"start": 126.84, "end": 131.8, "text": " Of course, the final simulation that we get for production use are transparent like the"}, {"start": 131.8, "end": 133.84, "text": " one you see here."}, {"start": 133.84, "end": 
139.76, "text": " Now another interesting detail is that the execution time is linear with respect to the curve"}, {"start": 139.76, "end": 140.76, "text": " points."}, {"start": 140.76, "end": 143.32, "text": " So, what does that mean?"}, {"start": 143.32, "end": 145.35999999999999, "text": " Well, let's have a look together."}, {"start": 145.35999999999999, "end": 150.79999999999998, "text": " In the first scenario, we get a low quality underlying simulation and we add a hundred"}, {"start": 150.79999999999998, "end": 153.04, "text": " thousand wave curves."}, {"start": 153.04, "end": 158.0, "text": " This takes approximately ten seconds and looks like this."}, {"start": 158.0, "end": 163.95999999999998, "text": " This already greatly enhanced the quality of the results, but we can decide to add more."}, {"start": 163.95999999999998, "end": 169.35999999999999, "text": " So first case, a hundred k wave curves in tenies seconds."}, {"start": 169.35999999999999, "end": 171.32, "text": " Now comes the linear part."}, {"start": 171.32, "end": 176.92, "text": " If we decide that we are yearning for a little more, we can run two hundred k wave curves"}, {"start": 176.92, "end": 181.12, "text": " and the execution time will be twenty-each seconds."}, {"start": 181.12, "end": 182.64, "text": " It looks like this."}, {"start": 182.64, "end": 189.76, "text": " Better, we are getting there and for four hundred k wave curves, forty seconds and for eight"}, {"start": 189.76, "end": 195.12, "text": " hundred k curves, yes, you guessed it right, eighty seconds."}, {"start": 195.12, "end": 198.48, "text": " Double the number of curves, double the execution time."}, {"start": 198.48, "end": 201.72, "text": " And this is what the linear scaling part means."}, {"start": 201.72, "end": 206.64, "text": " Now of course, not even this technique is perfect, the post-processing nature of the method"}, {"start": 206.64, "end": 213.07999999999998, "text": " means that it can enrich the underlying simulation a great deal, but it cannot add changes that"}, {"start": 213.07999999999998, "end": 215.2, "text": " are too intrusive to it."}, {"start": 215.2, "end": 220.32, "text": " It can only add small waves relative to the size of the fluid domain."}, {"start": 220.32, "end": 225.83999999999997, "text": " But even with these, the value proposition of this paper is just out of this world."}, {"start": 225.84, "end": 231.4, "text": " So from now on, if we have a relatively poor quality fluid simulation that we abandoned"}, {"start": 231.4, "end": 238.04, "text": " years ago, we don't need to despair what we need is to harness the power of wave curves."}, {"start": 238.04, "end": 241.28, "text": " This episode has been supported by weights and biases."}, {"start": 241.28, "end": 246.56, "text": " In this post, they show you how to monitor and optimize your GPU consumption during model"}, {"start": 246.56, "end": 250.48000000000002, "text": " training in real time with one line of code."}, {"start": 250.48000000000002, "end": 255.72, "text": " During my PhD studies, I trained a ton of neural networks which were used in our experiments."}, {"start": 255.72, "end": 260.88, "text": " However, over time, there was just too much data in our repositories and what I am looking"}, {"start": 260.88, "end": 264.28, "text": " for is not data, but insight."}, {"start": 264.28, "end": 269.04, "text": " And that's exactly how weights and biases helps you by organizing your experiments."}, {"start": 269.04, "end": 274.68, "text": " It is 
used by more than 200 companies and research institutions, including OpenAI, Toyota"}, {"start": 274.68, "end": 277.4, "text": " Research, GitHub, and more."}, {"start": 277.4, "end": 283.28, "text": " And get this, weights and biases is free for all individuals, academics, and open source"}, {"start": 283.28, "end": 284.28, "text": " projects."}, {"start": 284.28, "end": 290.0, "text": " Make sure to visit them through wnba.com slash papers or just click the link in the video"}, {"start": 290.0, "end": 293.28, "text": " description and you can get a free demo today."}, {"start": 293.28, "end": 297.84, "text": " Our thanks to weights and biases for their long-standing support and for helping us make"}, {"start": 297.84, "end": 299.2, "text": " better videos for you."}, {"start": 299.2, "end": 328.76, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=eksOgX3vacs
Meet Your Virtual AI Stuntman! 💪🤖
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills" is available here: https://xbpeng.github.io/projects/DeepMimic/index.html ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Thumbnail tree image credit: https://pixabay.com/images/id-576847/ Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to look at a paper from three years ago, and not any kind of paper, but my kind of paper, which is at the intersection of machine learning, computer graphics, and physics simulations. This work zooms in on reproducing reference motions, but with a twist, and adds lots of additional features. So, what does all this mean? You see, we are given this virtual character, a reference motion that we wish to teach it, and here, additionally, we are given a task that needs to be done. So, when the reference motion is specified, we place our AI into a physics simulation, where it tries to reproduce these motions. That is a good thing, because if it tried to learn to run all by itself, it would look something like this. And if we ask it to mimic the reference motion, oh yes, much better. Now that we have built up confidence in this technique, let's think bigger and perform a backflip. Uh-oh, well, that didn't quite work. Why is that? We just established that we can give it a reference motion, and it can learn it by itself. Well, this chap failed to learn a backflip because it explored many motions during training, most of which resulted in failure. So, it didn't find a good solution and settled for a mediocre solution instead. A proposed technique by the name reference state initialization, RSI in short, remedies this issue by letting the agent explore better during the training phase. Got it, so we add this RSI, and now, all is well, right? Let's see, ouch. Not so much. It appears to fall on the ground and tries to continue the motion from there. A plus for effort, little AI, but unfortunately, that's not what we are looking for. So, what is the issue here? The issue is that the agent has hit the ground, and after that, it still tries to score some additional points by continuing to mimic the reference motion. Again, A plus for effort, but this should not give the agent additional scores. The method that remedies this is called early termination. Let's try it. Now, we add early termination and RSI together, and let's see if this will do the trick. And yes, finally, with these two additions, it can now perform that sweet, sweet backflip, rolls, and much, much more, with flying colors. So now, the agent has the basics down and can even perform explosive, dynamic motions as well. So, it is time. Now hold onto your papers, because now comes the coolest part: we can perform different kinds of retargeting as well. What is that? Well, one kind is retargeting the environment. This means that we can teach the AI a landing motion in an idealized case, and then ask it to perform the same, but now, off of a tall ledge. Or we can teach it to run, and then drop it into a computer game level and see if it performs well there. And it really does. Amazing. This part is very important, because in any reasonable industry use, these characters have to perform in a variety of environments that are different from the training environment. The second kind is retargeting not the environment, but the body type. We can have different types of characters learn the same motions. This is pretty nice for the Atlas robot, which has a drastically different weight distribution, and you can also see that the technique is robust against perturbations. Yes, this means one of the favorite pastimes of a computer graphics researcher, which is throwing boxes at virtual characters and seeing how well they can take it.
Might as well make use of the fact that in a simulated world, we make up all the rules. This one is doing really well. Oh. Note that the Atlas robot is indeed different from the previous model, and these motions can be retargeted to it; however, this is also a humanoid. Can we ask for non-humanoids as well, perhaps? Oh, yes. This technique supports retargeting to T-Rexes, dragons, lions, you name it. It can even get used to the gravity of different virtual planets that we dream up. Bravo. So the value proposition of this paper is completely out of this world. Reference state initialization, early termination, retargeting to different body types and environments, oh my. To have digital applications like computer games use this would already be amazing, and just imagine what we could do if we could deploy these to real-world robots. And don't forget, these research works just keep on improving every year. The First Law of Papers says that research is a process. Do not look at where we are, look at where we will be two more papers down the line. Now, fortunately, we can do that right now. Why is that? It is because this paper is from 2018, which means that follow-up papers already exist. What's more, we even discussed one that teaches these agents to not only reproduce these reference motions, but to do those with style. Style there meant that the agent is allowed to make creative deviations from the reference motion, thus developing its own way of doing it. An amazing improvement. And I wonder what researchers will come up with in the near future. If you have some ideas, let me know in the comments below. What a time to be alive. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only cloud service with 48GB RTX 8000s. And researchers at organizations like Apple, MIT, and Caltech are using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com/papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
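To make the two fixes discussed in this transcript a little more tangible, here is a minimal, self-contained Python sketch of an imitation-learning episode loop that uses reference state initialization and early termination. Everything in it is a toy stand-in: the "character" is a single scalar pose, the policy is random, and none of this is the DeepMimic authors' code; it only shows where the two ideas slot into an ordinary training loop.

```python
import random

class ReferenceMotion:
    """Toy stand-in for a motion-capture clip: the target pose is just a
    number that moves from 0 to 1 over one second."""
    duration = 1.0
    def state_at(self, phase):
        return phase
    def pose_error(self, state, phase):
        return abs(state - phase)

class ToyEnv:
    """Toy physics stand-in: the 'character' is a single scalar pose, and it
    has 'fallen' if it drifts too far from where the reference could be."""
    dt = 0.05
    def reset(self, to):
        self.pose = to
        return self.pose
    def step(self, action):
        self.pose += action * self.dt
        return self.pose, self.character_has_fallen(self.pose)
    def character_has_fallen(self, state):
        return state > 2.0 or state < -1.0

def train_imitation(env, reference, num_episodes=200):
    for _ in range(num_episodes):
        # Reference State Initialization (RSI): start at a random phase of the
        # clip, so later parts of the motion get explored early in training.
        phase = random.uniform(0.0, 1.0)
        state = env.reset(to=reference.state_at(phase))
        done = False
        while not done and phase < 1.0:
            action = random.uniform(-1.0, 1.0)        # placeholder policy
            state, done = env.step(action)
            reward = -reference.pose_error(state, phase)
            # Early termination: once the character has fallen, end the episode
            # so it cannot keep scoring points by mimicking from the ground.
            if env.character_has_fallen(state):
                reward, done = 0.0, True
            # a real method would now update the policy with (state, action, reward)
            phase += env.dt / reference.duration

train_imitation(ToyEnv(), ReferenceMotion())
```

The design point is that RSI changes where episodes start, while early termination changes where they end, so together they shape which parts of the reference motion the agent actually gets to practice and be rewarded for.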
[{"start": 0.0, "end": 4.76, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.76, "end": 10.96, "text": " Today, we are going to look at a paper from three years ago, and not any kind of paper,"}, {"start": 10.96, "end": 19.16, "text": " but my kind of paper, which is in the intersection of machine learning, computer graphics, and physics simulations."}, {"start": 19.16, "end": 28.04, "text": " This work zooms in on reproducing reference motions, but with a twist and adds lots of additional features."}, {"start": 28.04, "end": 30.52, "text": " So, what does all this mean?"}, {"start": 30.52, "end": 36.48, "text": " You see, we are given this virtual character, a reference motion that we wish to teach it,"}, {"start": 36.48, "end": 40.84, "text": " and here, additionally, we are given a task that needs to be done."}, {"start": 40.84, "end": 46.56, "text": " So, when the reference motion is specified, we place our AI into a physics simulation,"}, {"start": 46.56, "end": 49.32, "text": " where it tries to reproduce these motions."}, {"start": 49.32, "end": 54.76, "text": " That is a good thing, because if it would try to learn to run by itself alone,"}, {"start": 54.76, "end": 57.879999999999995, "text": " it would look something like this."}, {"start": 57.88, "end": 62.36, "text": " And if we ask it to mimic the reference motion,"}, {"start": 62.36, "end": 65.2, "text": " oh yes, much better."}, {"start": 65.2, "end": 72.44, "text": " Now that we have built up confidence in this technique, let's think bigger and perform a backflip."}, {"start": 72.44, "end": 77.36, "text": " Uh-oh, well, that didn't quite work."}, {"start": 77.36, "end": 78.88, "text": " Why is that?"}, {"start": 78.88, "end": 84.76, "text": " We just established that we can give it a reference motion, and it can learn it by itself."}, {"start": 84.76, "end": 91.24000000000001, "text": " Well, this chap failed to learn a backflip because it explored many motions during training,"}, {"start": 91.24000000000001, "end": 93.72, "text": " most of which resulted in failure."}, {"start": 93.72, "end": 100.04, "text": " So, it didn't find a good solution and settled for a mediocre solution instead."}, {"start": 100.04, "end": 106.08000000000001, "text": " A proposed technique by the name reference state initialization, RSI in short, remedies"}, {"start": 106.08000000000001, "end": 110.84, "text": " this issue by letting the agent explore better during the training phase."}, {"start": 110.84, "end": 117.2, "text": " Got it, so we add this RSI, and now, all is well, right?"}, {"start": 117.2, "end": 120.36, "text": " Let's see, ouch."}, {"start": 120.36, "end": 121.76, "text": " Not so much."}, {"start": 121.76, "end": 126.56, "text": " It appears to fall on the ground and tries to continue the motion from there."}, {"start": 126.56, "end": 131.84, "text": " A plus for effort, little AI, but unfortunately, that's not what we are looking for."}, {"start": 131.84, "end": 134.36, "text": " So, what is the issue here?"}, {"start": 134.36, "end": 140.0, "text": " The issue is that the agent has hit the ground, and after that, it still tries to score some"}, {"start": 140.0, "end": 144.04, "text": " additional points by continuing to mimic the reference motion."}, {"start": 144.04, "end": 150.56, "text": " Again, E plus for effort, but this should not give the agent additional scores."}, {"start": 150.56, "end": 154.08, "text": " This method we just described is called early 
termination."}, {"start": 154.08, "end": 155.88, "text": " Let's try it."}, {"start": 155.88, "end": 163.2, "text": " Now, we add the early termination and RSI together, and let's see if this will do the trick."}, {"start": 163.2, "end": 169.96, "text": " And yes, finally, with these two additions, it can now perform that sweet, sweet backflip"}, {"start": 169.96, "end": 174.96, "text": " \u2013 rolls, and much, much more with flying colors."}, {"start": 174.96, "end": 180.76000000000002, "text": " So now, the agent has the basics down and can even perform explosive dynamic motions as"}, {"start": 180.76000000000002, "end": 181.76000000000002, "text": " well."}, {"start": 181.76000000000002, "end": 183.96, "text": " So, it is time."}, {"start": 183.96, "end": 189.4, "text": " Now hold onto your papers as now comes the coolest part, we can perform different kinds"}, {"start": 189.4, "end": 192.20000000000002, "text": " of retargeting as well."}, {"start": 192.20000000000002, "end": 193.20000000000002, "text": " What is that?"}, {"start": 193.20000000000002, "end": 197.4, "text": " Well, one kind is retargeting the environment."}, {"start": 197.4, "end": 203.52, "text": " This means that we can teach the AI a landing motion in an idealized case, and then ask"}, {"start": 203.52, "end": 208.4, "text": " it to perform the same, but now, off of a tall ledge."}, {"start": 208.4, "end": 214.4, "text": " Or we can teach it to run, and then drop it into a computer game level and see if it performs"}, {"start": 214.4, "end": 216.04000000000002, "text": " well there."}, {"start": 216.04000000000002, "end": 217.84, "text": " And it really does."}, {"start": 217.84, "end": 219.0, "text": " Amazing."}, {"start": 219.0, "end": 224.4, "text": " This part is very important because in any reasonable industry use, these characters"}, {"start": 224.4, "end": 229.8, "text": " have to perform in a variety of environments that are different from the training environment."}, {"start": 229.8, "end": 235.4, "text": " Two is retargeting not the environment, but the body type."}, {"start": 235.4, "end": 240.08, "text": " We can have different types of characters learn the same motions."}, {"start": 240.08, "end": 245.64000000000001, "text": " This is pretty nice for the Atlas robot, which has a drastically different way distribution,"}, {"start": 245.64000000000001, "end": 250.28, "text": " and you can also see that the technique is robust against perturbations."}, {"start": 250.28, "end": 256.28, "text": " Yes, this means one of the favorite pastimes of a computer graphics researcher, which is"}, {"start": 256.28, "end": 262.04, "text": " throwing boxes at virtual characters and seeing how well it can take it."}, {"start": 262.04, "end": 267.92, "text": " Might as well make sure of the fact that in a simulated world, we make up all the rules."}, {"start": 267.92, "end": 270.48, "text": " This one is doing really well."}, {"start": 270.48, "end": 271.48, "text": " Oh."}, {"start": 271.48, "end": 277.2, "text": " Note that the Atlas robot is indeed different than the previous model, and these motions"}, {"start": 277.2, "end": 282.4, "text": " can be retargeted to it, however, this is also a humanoid."}, {"start": 282.4, "end": 285.32, "text": " Can we ask for non-humanoids as well, perhaps?"}, {"start": 285.32, "end": 286.92, "text": " Oh, yes."}, {"start": 286.92, "end": 294.52, "text": " This technique supports retargeting to T-Rexes, Dragons, Lions, you name it."}, {"start": 294.52, "end": 299.76, "text": " It can 
even get used to the gravity of different virtual planets that we dream up."}, {"start": 299.76, "end": 300.76, "text": " Bravo."}, {"start": 300.76, "end": 305.4, "text": " So the value proposition of this paper is completely out of this world."}, {"start": 305.4, "end": 312.0, "text": " When state initialization, early termination, retargeting to different body types, environments,"}, {"start": 312.0, "end": 313.28, "text": " oh my."}, {"start": 313.28, "end": 319.23999999999995, "text": " To have digital applications like computer games use this would already be amazing and just"}, {"start": 319.23999999999995, "end": 324.64, "text": " imagine what we could do if we could deploy these to real world robots."}, {"start": 324.64, "end": 329.52, "text": " And don't forget, these research works just keep on improving every year."}, {"start": 329.52, "end": 333.35999999999996, "text": " The first law of paper says that research is a process."}, {"start": 333.36, "end": 338.56, "text": " Do not look at where we are, look at where we will be two more papers down the line."}, {"start": 338.56, "end": 342.96000000000004, "text": " Now, fortunately, we can do that right now."}, {"start": 342.96000000000004, "end": 344.28000000000003, "text": " Why is that?"}, {"start": 344.28000000000003, "end": 350.6, "text": " It is because this paper is from 2018, which means that follow-up papers already exist."}, {"start": 350.6, "end": 355.56, "text": " What's more, we even discussed one that teaches these agents to not only reproduce these"}, {"start": 355.56, "end": 359.76, "text": " reference motions, but to do those with style."}, {"start": 359.76, "end": 365.08, "text": " Its style there meant that the agent is allowed to make creative deviations from the reference"}, {"start": 365.08, "end": 369.4, "text": " motion, thus developing its own way of doing it."}, {"start": 369.4, "end": 371.28, "text": " An amazing improvement."}, {"start": 371.28, "end": 375.52, "text": " And I wonder what researchers will come up with in the near future."}, {"start": 375.52, "end": 378.84, "text": " If you have some ideas, let me know in the comments below."}, {"start": 378.84, "end": 380.52, "text": " What a time to be alive."}, {"start": 380.52, "end": 384.0, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 384.0, "end": 389.96, "text": " If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 389.96, "end": 396.96, "text": " They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances."}, {"start": 396.96, "end": 404.32, "text": " And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure."}, {"start": 404.32, "end": 409.84, "text": " Plus they are the only Cloud service with 48GB RTX 8000."}, {"start": 409.84, "end": 416.23999999999995, "text": " And researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances,"}, {"start": 416.23999999999995, "end": 418.03999999999996, "text": " workstations, or servers."}, {"start": 418.03999999999996, "end": 424.03999999999996, "text": " Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances"}, {"start": 424.03999999999996, "end": 425.03999999999996, "text": " today."}, {"start": 425.03999999999996, "end": 429.52, "text": " Our thanks to Lambda for their long-standing support and for helping us make better videos"}, {"start": 429.52, "end": 430.52, "text": " for you."}, {"start": 430.52, "end": 457.96, 
"text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=x2zDrSgrlYQ
Beautiful Glitter Simulation…Faster Than Real Time! ✨
❤️ Check out the Gradient Dissent podcast by Weights & Biases: http://wandb.me/gd  📝 The paper "Procedural Physically based BRDF for Real-Time Rendering of Glints" is available here: http://igg.unistra.fr/People/chermain/real_time_glint/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. This incredible paper promises procedural and physically-based rendering of glittering surfaces. Whoa! Okay, I am sold, provided that it does as well as advertised. We shall see about that. Oh goodness, the results look lovely. And now, while we marvel at these, let's discuss what these terms really mean. One, physically-based means that it is built on a foundation that is based in physics. That is not surprising, as the name says so. The surprise usually comes when people use the term without careful consideration. You see, light transport researchers take this term very seriously: if you claim that your model is physically-based, you had better bring your A-game. We will inspect that. Two, the procedural part means that we ourselves can algorithmically generate many of these material models. For instance, this earlier paper was able to procedurally generate the geometry of climbing plants and simulate their growth. Or, procedural generation can come to the rescue when we are creating a virtual environment and we need hundreds of different flowers, thousands of blades of grass, and more. This is an actual example of procedural geometry that was created with the wonderful Terragen program. Three, rendering means a computer program that we run on our machine at home, and it creates a beautiful series of images like these ones. In the case of photorealistic rendering, we typically need to wait from minutes to hours for every single image. So, how fast is this technique? Well, hold on to your papers, because we don't need to wait from minutes to hours for every image. Instead, we only need two and a half milliseconds per frame. Absolute witchcraft. The fact that we can do this in real time blows my mind, and yes, this means that we can test the procedural part in an interactive demo too. Here we can play with a set of parameters. Look, the roughness of the surface is not set in stone, we can specify it and watch as the material changes in real time. We can also change the density of microfacets. These are tiny imperfections in the surface that make these materials really come alive. And if we make the microfacet density much larger, the material becomes more diffuse. And if we make them really small, oh, loving this. So, as much as I love this, I would also like to know how accurate this is. For reference, here is a result from a previous technique that is really accurate. However, this is one of the methods that takes from minutes to hours for just one image. And here is the other end of the spectrum. This is a different technique that is lower in quality; however, in return, it can produce these in real time. So, which one is the new one closer to? The accurate, slow one, or the less accurate, quick one? What? Its quality is as good as the accurate one, and it also runs in real time. The best of both worlds. Wow! Now, of course, not even this technique is perfect. The fact that this particular example worked so well is great, but it doesn't always come out so well. And I know, I know, you're asking, can we try this new method? And the answer is a resounding yes, you can try it right now in multiple places. There is a web demo. And it was even implemented in Shadertoy. So, now we know what it means to render procedural and physically-based glittering surfaces in real time. Absolutely incredible! What a time to be alive!
And if you enjoyed this episode, we may have two more incredible glint rendering papers coming up in the near future. This... and this. Let me know in the comments below if you would like to see them. Write something like, yes please. This video has been supported by Weights & Biases. They have an amazing podcast by the name Gradient Dissent, where they interview machine learning experts who discuss how they use learning-based algorithms to solve real-world problems. They've discussed biology, teaching robots, machine learning in outer space, and a whole lot more. Perfect for a Fellow Scholar with an open mind. Make sure to visit them through wandb.me/gd or just click the link in the video description. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
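For the curious, here is a tiny Python toy that captures the counting intuition behind procedural glints discussed in this transcript. It is not the paper's actual estimator: in this sketch, each surface cell owns a number of random microfacet normals that are regenerated on the fly from a hash of the cell id (so nothing has to be stored), and the glint brightness is the fraction of those facets that mirror the light toward the camera. All names and parameters here are made up for illustration. With only a few facets per cell the response is spiky and sparkly; with a very large density it averages out toward a smoother, more diffuse-looking highlight, which loosely matches the behavior shown in the demo.

```python
import numpy as np

def toy_glint_intensity(cell_id, half_vector, density=1000, roughness=0.3, seed=0):
    """Toy illustration of the counting idea behind procedural glints (not the
    paper's estimator): draw `density` microfacet normals from a hash of the
    cell id and return the fraction that align with the half vector."""
    rng = np.random.default_rng(hash((cell_id, seed)) & 0xFFFFFFFF)
    # random microfacet normals concentrated around the geometric normal (0, 0, 1)
    normals = rng.normal([0.0, 0.0, 1.0], roughness, size=(density, 3))
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    h = np.asarray(half_vector, dtype=float)
    h /= np.linalg.norm(h)
    aligned = (normals @ h) > 0.995      # facets mirroring the light toward the eye
    return aligned.mean()                # sparse, spiky response -> visible glints

print(toy_glint_intensity(cell_id=(12, 7), half_vector=[0.05, 0.0, 1.0]))
```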
[{"start": 0.0, "end": 4.76, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zonai-Fehir,"}, {"start": 4.76, "end": 12.4, "text": " this incredible paper promises procedural and physically-based rendering of glittering surfaces."}, {"start": 12.4, "end": 13.8, "text": " Whoa!"}, {"start": 13.8, "end": 19.400000000000002, "text": " Okay, I am sold, provided that it does as well as advertised."}, {"start": 19.400000000000002, "end": 21.0, "text": " We shall see about that."}, {"start": 23.0, "end": 26.6, "text": " Oh goodness, the results look lovely."}, {"start": 26.6, "end": 32.2, "text": " And now, while we marvel at these, let's discuss what these terms really mean."}, {"start": 32.2, "end": 38.0, "text": " One, physically-based means that it is built on a foundation that is based in physics."}, {"start": 38.0, "end": 41.400000000000006, "text": " That is not surprising, as the name Sassal."}, {"start": 41.400000000000006, "end": 47.0, "text": " The surprise usually comes when people use the term without careful consideration."}, {"start": 47.0, "end": 51.6, "text": " You see, light transport researchers take this term very seriously"}, {"start": 51.6, "end": 56.4, "text": " if you claim that your model is physically-based, you better bring your A-game."}, {"start": 56.4, "end": 58.2, "text": " We will inspect that."}, {"start": 58.2, "end": 64.0, "text": " Two, the procedural part means that we, ourselves, can algorithmically generate"}, {"start": 64.0, "end": 67.0, "text": " many of these material models, ourselves."}, {"start": 67.0, "end": 73.4, "text": " For instance, this earlier paper was able to procedurally generate the geometry of climbing plans"}, {"start": 73.4, "end": 75.8, "text": " and simulate their growth."}, {"start": 75.8, "end": 78.8, "text": " Or procedural generation can come to the rescue"}, {"start": 78.8, "end": 84.2, "text": " when we are creating a virtual environment and we need hundreds of different flowers,"}, {"start": 84.2, "end": 87.39999999999999, "text": " thousands of blades of grass and more."}, {"start": 87.39999999999999, "end": 94.6, "text": " This is an actual example of procedural geometry that was created with the wonderful Tyrogen program."}, {"start": 94.6, "end": 100.2, "text": " Three, rendering means a computer program that we run on our machine at home"}, {"start": 100.2, "end": 104.4, "text": " and it creates a beautiful series of images like these ones."}, {"start": 104.4, "end": 109.80000000000001, "text": " In the case of photorealistic rendering, we typically need to wait from minutes to hours"}, {"start": 109.80000000000001, "end": 112.2, "text": " for every single image."}, {"start": 112.2, "end": 115.80000000000001, "text": " So, how fast is this technique?"}, {"start": 115.80000000000001, "end": 121.60000000000001, "text": " Well, hold on to your papers because we don't need to wait from minutes to hours for every image."}, {"start": 121.60000000000001, "end": 127.0, "text": " Instead, we only need two and a half milliseconds per frame."}, {"start": 127.0, "end": 129.0, "text": " Absolute witchcraft."}, {"start": 129.0, "end": 134.4, "text": " The fact that we can do this in real time blows my mind and yes,"}, {"start": 134.4, "end": 139.4, "text": " this means that we can test the procedural part in an interactive demo too."}, {"start": 139.4, "end": 145.4, "text": " Here we can play with a set of parameters, look, the roughness of the surface is not set in stone,"}, {"start": 145.4, "end": 150.6, "text": 
" we can specify it and see as the material changes in real time."}, {"start": 150.6, "end": 153.8, "text": " We can also change the density of micro facets."}, {"start": 153.8, "end": 159.60000000000002, "text": " These are tiny imperfections in the surface that make these materials really come alive."}, {"start": 159.60000000000002, "end": 166.4, "text": " And if we make the micro facet density much larger, the material becomes more diffuse."}, {"start": 166.4, "end": 172.4, "text": " And if we make them really small, oh, loving this."}, {"start": 172.4, "end": 177.8, "text": " So, as much as I love this, I would also like to know how accurate this is."}, {"start": 177.8, "end": 182.4, "text": " For reference, here is a result from a previous technique that is really accurate."}, {"start": 182.4, "end": 188.8, "text": " However, this is one of the methods that takes from minutes to hours for just one image."}, {"start": 188.8, "end": 191.4, "text": " And here is the other end of the spectrum."}, {"start": 191.4, "end": 195.0, "text": " This is a different technique that is lower in quality."}, {"start": 195.0, "end": 200.0, "text": " However, in return, it can produce these in real time."}, {"start": 200.0, "end": 203.20000000000002, "text": " So, which one is the new one closer to?"}, {"start": 203.20000000000002, "end": 209.20000000000002, "text": " The accurate, slow one, or the less accurate, quick one."}, {"start": 209.2, "end": 217.2, "text": " What? Its quality is as good as the accurate one, and it also runs in real time."}, {"start": 217.2, "end": 220.79999999999998, "text": " The best of both worlds. Wow!"}, {"start": 220.79999999999998, "end": 223.79999999999998, "text": " Now, of course, not even this technique is perfect."}, {"start": 223.79999999999998, "end": 230.6, "text": " The fact that this particular example worked so well is great, but it doesn't always come out so well."}, {"start": 230.6, "end": 235.6, "text": " And I know, I know, you're asking, can we try this new method?"}, {"start": 235.6, "end": 241.79999999999998, "text": " And the answer is, resounding yes, you can try it right now in multiple places."}, {"start": 241.79999999999998, "end": 244.4, "text": " There is a web demo."}, {"start": 244.4, "end": 248.2, "text": " And it was even implemented in shader toy."}, {"start": 248.2, "end": 255.79999999999998, "text": " So, now we know what it means to render procedural and physically-based glittering surfaces in real time."}, {"start": 255.79999999999998, "end": 259.6, "text": " Absolutely incredible! What a time to be alive!"}, {"start": 259.6, "end": 267.20000000000005, "text": " And if you enjoyed this episode, we may have two more incredible, glint rendering papers coming up in the near future."}, {"start": 267.20000000000005, "end": 270.8, "text": " This... 
and this."}, {"start": 270.8, "end": 274.20000000000005, "text": " Let me know in the comments below if you would like to see them."}, {"start": 274.20000000000005, "end": 276.8, "text": " Write something like, yes please."}, {"start": 276.8, "end": 280.0, "text": " This video has been supported by weights and biases."}, {"start": 280.0, "end": 283.6, "text": " They have an amazing podcast by the name Gradient Descent,"}, {"start": 283.6, "end": 291.40000000000003, "text": " where they interview machine learning experts who discuss how they use learning-based algorithms to solve real-world problems."}, {"start": 291.40000000000003, "end": 299.0, "text": " They've discussed biology, teaching robots, machine learning in outer space, and a whole lot more."}, {"start": 299.0, "end": 302.0, "text": " Perfect for a fellow scholar with an open mind."}, {"start": 302.0, "end": 309.40000000000003, "text": " Make sure to visit them through wnb.me-gd or just click the link in the video description."}, {"start": 309.4, "end": 315.79999999999995, "text": " Our thanks to weights and biases for their long-standing support and for helping us make better videos for you."}, {"start": 315.8, "end": 345.6, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=9RzCZZBjlxM
AI “Artist” Creates Near-Perfect Toonifications! 👩‍🎨
❤️ Check out Weights & Biases and sign up for a free demo here: https://wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/wandb/getting-started/reports/Debug-Compare-Reproduce-Machine-Learning-Models--VmlldzoyNzY5MDk?utm_source=karoly 📝 The paper "ReStyle: A Residual-Based StyleGAN Encoder via Iterative Refinement" is available here: https://yuval-alaluf.github.io/restyle-encoder/ 📝 Our material synthesis paper is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/ 📝 The font manifold paper is available here: http://vecg.cs.ucl.ac.uk/Projects/projects_fonts/projects_fonts.html 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to generate human faces and even better, we will keep them intact. You will see what that means in a moment. This new neural network based technique can dream up completely new images and more. However, this is not the first technique to do this, but it does these things better. Let's look at three amazing features that it offers and then discuss how and why it is better than its predecessors. Hold on to your papers for the first example, which is my favorite: image toonification. Would you like to see what the AI thinks you would look like if you were a Disney character? Well, here you go. And these are not some rudimentary, first paper in the works kind of results. These are proper toonifications. You could ask for money for some of these, and they are done completely automatically by a learning algorithm. At the end of the video, you will also witness as I myself get toonified. And what is even cooler is that we can not only produce these still images, but even compute intermediate images between two input photos and get meaningful results. I'll stop the process here and there to show you how good these are. I am blown away. Two, it can also perform the usual suspects. For instance, it can make us older or younger, or put a smile on our face too. However, three, it works not only on human faces, but cars, animals, and buildings too. So, the results are all great, but how does all this wizardry happen? Well, we take an image and embed it into a latent space, and in this space we can easily apply modifications. Okay, but what is this latent space thing? A latent space is a made-up place where we are trying to organize data in a way that similar things are close to each other. What you see here is a 2D latent space for generating different fonts. It is hard to explain why these fonts are similar, but most of us would agree that they indeed share some common properties. The cool thing here is that we can explore this latent space with our cursor and generate all kinds of new fonts. You can try this work in your browser. The link is available in the video description. And, luckily, we can build a latent space not only for fonts, but for nearly anything. I am a light transport researcher by trade, so in this earlier paper we were interested in generating hundreds of variants of a material model to populate this scene. In this latent space, we can concoct all of these really cool digital material models. A link to this work is also available in the video description. Now, for the face generator algorithms, this embedding step is typically imperfect, which means that we might lose some information during the process. In the better cases, things may look a little different, but that is not even the worst-case scenario. I'll show you that in a moment. For the milder case, here is an earlier example from a paper by the name StyleFlow, where the authors embedded me into a latent space and it indeed came out a little different. But not so bad. A later work, StyleCLIP, was able to make me look like Obi-Wan Kenobi, which is excellent. However, the embedding step was more imperfect. The bearded image was embedded like this. You are probably saying that this looks different, but even this is not so bad. If you want to see a much worse example, look at this. My goodness, now this is quite different. Now that we saw what it could do, it is time to ask the big question: how much better is it than previous works? Do we have an A/B test for that? And the answer is yes, of course. Let's embed this gentleman and see how he comes out on the other end. Well, without the improvements of this paper, once again, quite different. The beard is nearly gone, and when we toonify the image, let's see, yep, that beard is gone for good. So, can this paper get that beard back? Let's see. Oh yes, if we refine the embedding with this new method, we get that glorious beard back. That is one heck of a toon image. Congratulations, loving it. And now it's my turn. One of the results was amazing. I really like this one. And how about this? Well, not bad. And I wonder if it can deal with sunglasses. Well, kind of, but not in the way you might think. What do you think? Let me know in the comments below. Note that you can only see these results here on Two Minute Papers, and a big thank you to the authors for taking time off their busy day and doing these experiments for us. And here are a few more tests. Let's see how it fares with these. The inputs are a diverse set of images from different classes, and the initial embeddings are, well, a bit of a disappointment. But that is kind of the point, because this new technique does not stop there and iteratively improves them. Yes, getting better. And by the end, my goodness, very close to the input. Don't forget that the goal here is not to implement a copying machine. The key difference is that we can't do too much with the input image, but after the embedding step, we can do all this toonification and other kinds of magic with it, and the results are only relevant as long as the two images are close. And they are really close. Bravo. So good. So I hope that now you agree that the pace of progress in machine learning and synthetic image generation is absolutely incredible. What a time to be alive. This episode has been supported by Weights & Biases. In this post, they show you how to use their reports to explain how your model works, show plots of how model versions improved, discuss bugs and demonstrate progress towards milestones. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs, such as OpenAI, Toyota Research, GitHub and more. And the best part is that Weights & Biases is free for all individuals, academics and open source projects. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is 2 Minute Paper Sir Dr. Carlos Jona Ifejir."}, {"start": 4.64, "end": 11.36, "text": " Today we are going to generate human faces and even better we will keep them intact."}, {"start": 11.36, "end": 13.92, "text": " You will see what that means in a moment."}, {"start": 13.92, "end": 20.16, "text": " This new neural network based technique can dream up completely new images and more."}, {"start": 20.16, "end": 24.400000000000002, "text": " However, this is not the first technique to do this, but this."}, {"start": 24.400000000000002, "end": 26.400000000000002, "text": " This does them better."}, {"start": 26.4, "end": 34.64, "text": " Let's look at three amazing features that it offers and then discuss how and why it is better than its predecessors."}, {"start": 34.64, "end": 40.72, "text": " Hold on to your papers for the first example, which is my favorite image tunification."}, {"start": 40.72, "end": 46.56, "text": " Would you like to see what the AI thinks you would look like if you were a Disney character?"}, {"start": 46.56, "end": 49.28, "text": " Well, here you go."}, {"start": 49.28, "end": 54.480000000000004, "text": " And these are not some rudimentary, first paper in the works kind of results."}, {"start": 54.48, "end": 56.879999999999995, "text": " These are proper tunifications."}, {"start": 56.879999999999995, "end": 63.44, "text": " You could ask for money for some of these and they are done completely automatically by a learning algorithm."}, {"start": 65.75999999999999, "end": 70.56, "text": " At the end of the video, you will also witness as I myself get tunified."}, {"start": 71.28, "end": 76.0, "text": " And what is even cooler is that we can not only produce these still images,"}, {"start": 76.0, "end": 82.8, "text": " but even compute intermediate images between two input photos and get meaningful results."}, {"start": 82.8, "end": 87.2, "text": " I'll stop the process here and there to show you how good these are."}, {"start": 90.0, "end": 91.6, "text": " I am blown away."}, {"start": 92.39999999999999, "end": 95.12, "text": " Two, it can also perform the usual suspects."}, {"start": 95.92, "end": 98.8, "text": " For instance, it can make us older or younger,"}, {"start": 101.28, "end": 103.67999999999999, "text": " or put a smile on our face too."}, {"start": 106.16, "end": 110.47999999999999, "text": " However, three, it works not only on human faces, but cars,"}, {"start": 110.48, "end": 114.96000000000001, "text": " animals, and buildings too."}, {"start": 114.96000000000001, "end": 120.88000000000001, "text": " So, the results are all great, but how does all this wizardry happen?"}, {"start": 120.88000000000001, "end": 127.92, "text": " Well, we take an image and embed it into a latent space and in this space we can easily apply"}, {"start": 127.92, "end": 128.96, "text": " modifications."}, {"start": 128.96, "end": 132.8, "text": " Okay, but what is this latent space thing?"}, {"start": 132.8, "end": 140.08, "text": " A latent space is a made up place where we are trying to organize data in a way that similar things"}, {"start": 140.08, "end": 141.76000000000002, "text": " are close to each other."}, {"start": 141.76000000000002, "end": 146.56, "text": " What you see here is a 2D latent space for generating different fonts."}, {"start": 147.20000000000002, "end": 154.24, "text": " It is hard to explain why these fonts are similar, but most of us would agree that they indeed share"}, {"start": 
154.24, "end": 155.44, "text": " some common properties."}, {"start": 156.24, "end": 161.84, "text": " The cool thing here is that we can explore this latent space with our cursor and generate all"}, {"start": 161.84, "end": 163.28, "text": " kinds of new fonts."}, {"start": 163.28, "end": 165.12, "text": " You can try this work in your browser."}, {"start": 165.12, "end": 167.52, "text": " The link is available in the video description."}, {"start": 167.52, "end": 174.8, "text": " And, luckily, we can build a latent space not only for fonts, but for nearly anything."}, {"start": 174.8, "end": 180.56, "text": " I am a light transport researcher by trade, so in this earlier paper we were interested in generating"}, {"start": 180.56, "end": 184.56, "text": " hundreds of variants of a material model to populate this scene."}, {"start": 185.76000000000002, "end": 190.88, "text": " In this latent space, we can concoct all of these really cool digital material models."}, {"start": 191.84, "end": 195.20000000000002, "text": " A link to this work is also available in the video description."}, {"start": 195.2, "end": 201.44, "text": " Now, for the face generator algorithms, this embedding step is typically imperfect,"}, {"start": 201.44, "end": 205.44, "text": " which means that we might lose some information during the process."}, {"start": 206.0, "end": 212.23999999999998, "text": " In the better cases, things may look a little different, but does not even the worst case scenario"}, {"start": 212.23999999999998, "end": 213.67999999999998, "text": " I'll show you that in a moment."}, {"start": 214.32, "end": 220.0, "text": " For the milder case, here is an earlier example from a paper by the name Styrofo,"}, {"start": 220.0, "end": 226.16, "text": " where the authors embedded me into a latent space and it indeed came out a little different."}, {"start": 226.88, "end": 233.84, "text": " But, not so bad. A later work Starclip was able to make me look like Obi-Wan Kenobi,"}, {"start": 233.84, "end": 238.4, "text": " which is excellent. However, the embedding step was more imperfect."}, {"start": 239.04, "end": 241.28, "text": " The bearded image was embedded like this."}, {"start": 242.08, "end": 247.04, "text": " You are probably saying that this looks different, but even this is not so bad,"}, {"start": 247.04, "end": 250.95999999999998, "text": " if you want to see a much worse example, look at this."}, {"start": 252.07999999999998, "end": 254.79999999999998, "text": " My goodness, now this is quite different."}, {"start": 255.68, "end": 259.92, "text": " Now that we saw what it could do, it is time to ask the big question,"}, {"start": 260.4, "end": 262.8, "text": " how much better is it than previous works?"}, {"start": 263.36, "end": 267.76, "text": " Do we have an AB test for that? And the answer is yes, of course."}, {"start": 268.24, "end": 272.8, "text": " Let's embed this gentleman and see how he comes out on the other end."}, {"start": 272.8, "end": 278.40000000000003, "text": " Well, without the improvements of this paper, once again, quite different."}, {"start": 278.40000000000003, "end": 283.44, "text": " The beard is nearly gone, and when we tuneify the image, let's see,"}, {"start": 283.44, "end": 286.88, "text": " yep, that beard is gone for good."}, {"start": 286.88, "end": 291.52, "text": " So, can this paper get that beard back? 
Let's see."}, {"start": 291.52, "end": 298.40000000000003, "text": " Oh yes, if we refine the embedding with this new method, we get that glorious beard back."}, {"start": 298.4, "end": 303.2, "text": " That is one heck of a tune image. Congratulations, loving it."}, {"start": 303.91999999999996, "end": 309.44, "text": " And now it's my turn. One of the results was amazing. I really like this one."}, {"start": 310.79999999999995, "end": 318.4, "text": " And how about this? Well, not bad. And I wonder if it can deal with sunglasses."}, {"start": 320.15999999999997, "end": 326.23999999999995, "text": " Well, kind of, but not in the way you might think. What do you think? Let me know in the comments"}, {"start": 326.24, "end": 332.88, "text": " below. Note that you can only see these results here on two minute papers and a big thank you to"}, {"start": 332.88, "end": 337.92, "text": " the authors for taking time off their busy day and doing these experiments for us."}, {"start": 338.72, "end": 344.88, "text": " And here are a few more tests. Let's see how it fares with these. The inputs are a diverse set"}, {"start": 344.88, "end": 351.28000000000003, "text": " of images from different classes and the initial embeddings are, well, a bit of a disappointment."}, {"start": 351.28, "end": 357.84, "text": " But that is kind of the point because this new technique does not let it stop there and iteratively"}, {"start": 357.84, "end": 366.47999999999996, "text": " improve them. Yes, getting better. And by the end, my goodness, very close to the input."}, {"start": 366.47999999999996, "end": 372.64, "text": " Don't forget that the goal here is not to implement a copying machine. The key difference is that we"}, {"start": 372.64, "end": 378.64, "text": " can't do too much with the input image. But after the embedding step, we can do all these"}, {"start": 378.64, "end": 384.96, "text": " tunification and other kinds of magic with it and the results are only relevant as long as the"}, {"start": 384.96, "end": 394.08, "text": " two images are close. And they are really close. Bravo. So good. So I hope that now you agree that"}, {"start": 394.08, "end": 399.84, "text": " the pace of progress in machine learning and synthetic image generation is absolutely incredible."}, {"start": 400.4, "end": 406.32, "text": " What a time to be alive. This episode has been supported by weights and biases. In this post,"}, {"start": 406.32, "end": 412.4, "text": " they show you how to use their reports to explain how your model works, show plots of how model"}, {"start": 412.4, "end": 418.48, "text": " versions improved, discuss bugs and demonstrate progress towards milestones. Weight and biases"}, {"start": 418.48, "end": 423.52, "text": " provides tools to track your experiments in your deep learning projects. Their system is designed"}, {"start": 423.52, "end": 428.96, "text": " to save you a ton of time and money and it is actively used in projects at prestigious labs,"}, {"start": 428.96, "end": 435.36, "text": " such as OpenAI, Toyota Research, GitHub and more. And the best part is that weight and biases"}, {"start": 435.36, "end": 442.32, "text": " is free for all individuals, academics and open source projects. It really is as good as it gets."}, {"start": 442.32, "end": 448.96000000000004, "text": " Make sure to visit them through wnb.com slash papers or just click the link in the video description"}, {"start": 448.96000000000004, "end": 454.32, "text": " and you can get a free demo today. 
Our thanks to weights and biases for their long standing support"}, {"start": 454.32, "end": 459.36, "text": " and for helping us make better videos for you. Thanks for watching and for your generous support"}, {"start": 459.36, "end": 469.36, "text": " and I'll see you next time."}]
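The "iteratively improves them" step described in the transcript above is the heart of the ReStyle idea: instead of predicting the latent code in one shot, an encoder repeatedly looks at the target photo next to the current reconstruction and predicts a small correction to the latent code. A rough Python sketch of that loop follows; the function names, tensor shapes and the number of iterations are placeholder assumptions, not the authors' actual API.

import torch

def iterative_inversion(encoder, generator, target, avg_latent, n_iters=5):
    # Start from the generator's average latent code and its (usually poor) rendering.
    latent = avg_latent.clone()
    recon = generator(latent)
    for _ in range(n_iters):
        # The encoder sees the target photo and the current guess side by side
        # (channel-wise concatenation) and predicts a residual latent offset.
        delta = encoder(torch.cat([target, recon], dim=1))
        latent = latent + delta
        recon = generator(latent)  # re-render with the refined code
    return latent, recon

In practice a handful of such refinement steps is what turns the "bit of a disappointment" initial embeddings mentioned in the transcript into reconstructions that sit very close to the input, after which edits such as toonification are applied to the refined latent code.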
Two Minute Papers
https://www.youtube.com/watch?v=3IFLVOaFAus
Can An AI Perform A Cartwheel? 🤸‍♂️
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Learning and Exploring Motor Skills with Spacetime Bounds" is available here: https://milkpku.github.io/project/spacetime.html ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we will see how this AI-based technique can help our virtual characters not only learn new movements, but they will even perform them with style. Now, here you see a piece of reference motion. This is what we would like our virtual character to learn. The task is to then enter a physics simulation, where we try to find the correct joint angles and movements to perform that. Of course, this is already a challenge, because even a small difference in joint positions can make a great deal of difference in the output. Then, the second, more difficult task is to do this with style. No two people perform a cartwheel exactly the same way, so would it be possible to have our virtual characters imbued with style, so that they, much like people, would have their own kinds of movement? Is that possible somehow? Well, let's have a look at the simulated characters. Nice, so this chap surely learned to at the very least reproduce the reference motion, but let's stop the footage here and there and look for differences. Oh yes, this is indeed different. This virtual character indeed has its own style, but at the same time, it is still faithful to the original reference motion. This is a magnificent solution to a very difficult task, and the authors made it look deceptively easy, but you will see in a moment that this is really challenging. So, how does all this magic happen? How do we imbue these virtual characters with style? Well, let's define style as creative deviation from the reference motion, so it can be different, but not too different, or else this happens. So, what are we seeing here? Here, with green, you see the algorithm's estimation of the center of mass for this character, and our goal would be to reproduce that as faithfully as possible. That would be the copying machine solution. Here comes the key for style, and that key is using space-time bounds. This means that the center of mass of the character can deviate from the original, but only as long as it remains strictly within these boundaries, and that is where the style emerges. If we wish to add a little style to the equation, we can set relatively loose space-time bounds around it, leaving room for the AI to explore. If we wish to strictly reproduce the reference motion, we can set the bounds to be really tight instead. This is a great technique to learn running, jumping, rolling behaviors, and it can even perform a stylish cartwheel, and backflips. Oh yeah, loving it! These space-time bounds also help us retarget the motion to different virtual body types. It also helps us salvage really bad-quality reference motions and make something useful out of them. So, are we done here? Is that all? No, not in the slightest. Now, hold on to your papers, because here comes the best part. With these novel space-time bounds, we can specify additional stylistic choices for the character's moves. For instance, we can encourage the character to use more energy for a more intense dancing sequence, or we can make it sleepier by asking it to decrease its energy use. And I wonder, if we can put bounds on the energy use, can we do more? For instance, do the same with body volume use. Oh yeah, this really opens up new kinds of motions that I haven't seen virtual characters perform yet. For instance, this chap was encouraged to use its entire body volume for a walk, and thus looks like someone who is clearly looking for trouble. And this poor thing just finished their paper for a conference deadline and is barely alive. We can even mix multiple motions together. For instance, what could be a combination of a regular running sequence and a band walk? Well, this. And if we have a standard running sequence and a happy walk, we can fuse them into a happy running sequence. How cool is that? So, with this technique, we can finally not only teach virtual characters to perform nearly any kind of reference motion, but we can even ask them to do this with style. What an incredible idea, loving it. Now, before we go, I would like to show you a short message that we got that melted my heart. This one is from Nathan, who has been inspired by these incredible works, and he decided to turn his life around and go back to study more. I love my job, and reading messages like this is one of the absolute best parts of it. Congratulations, Nathan. Thank you so much and good luck. If you have a similar story with this video series, make sure to let us know in the comments. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Jornai-Fehir."}, {"start": 4.8, "end": 11.6, "text": " Today, we will see how this AIB technique can help our virtual characters not only learn"}, {"start": 11.6, "end": 16.32, "text": " new movements, but they will even perform them with style."}, {"start": 16.32, "end": 20.0, "text": " Now, here you see a piece of reference motion."}, {"start": 20.0, "end": 23.48, "text": " This is what we would like our virtual character to learn."}, {"start": 23.48, "end": 27.0, "text": " The task is to then enter a physics simulation,"}, {"start": 27.0, "end": 32.44, "text": " where we try to find the correct joint angles and movements to perform that."}, {"start": 32.44, "end": 38.120000000000005, "text": " Of course, this is already a challenge because even a small difference in joint positions"}, {"start": 38.120000000000005, "end": 41.4, "text": " can make a great deal of difference in the output."}, {"start": 41.4, "end": 46.879999999999995, "text": " Then, the second more difficult task is to do this with style."}, {"start": 46.879999999999995, "end": 50.8, "text": " No two people perform cartwheel exactly the same way,"}, {"start": 50.8, "end": 56.0, "text": " so would it be possible to have our virtual characters imbued with style"}, {"start": 56.0, "end": 61.12, "text": " so that they, much like people, would have their own kinds of movement."}, {"start": 61.12, "end": 63.44, "text": " Is that possible somehow?"}, {"start": 63.44, "end": 66.88, "text": " Well, let's have a look at the simulated characters."}, {"start": 66.88, "end": 73.12, "text": " Nice, so this chap surely learned to at the very least reproduce the reference motion,"}, {"start": 73.12, "end": 78.36, "text": " but let's stop the footage here and there and look for differences."}, {"start": 78.36, "end": 81.4, "text": " Oh yes, this is indeed different."}, {"start": 81.4, "end": 86.2, "text": " This virtual character indeed has its own style, but at the same time,"}, {"start": 86.2, "end": 90.12, "text": " it is still faithful to the original reference motion."}, {"start": 90.12, "end": 94.52000000000001, "text": " This is a magnificent solution to a very difficult task,"}, {"start": 94.52000000000001, "end": 97.96000000000001, "text": " and the authors made it look deceptively easy,"}, {"start": 97.96000000000001, "end": 102.56, "text": " but you will see in a moment that this is really challenging."}, {"start": 102.56, "end": 105.80000000000001, "text": " So, how does all this magic happen?"}, {"start": 105.80000000000001, "end": 109.80000000000001, "text": " How do we imbue these virtual characters with style?"}, {"start": 109.8, "end": 115.0, "text": " Well, let's define style as creative deviation from the reference motion,"}, {"start": 115.0, "end": 121.47999999999999, "text": " so it can be different, but not too different, or else this happens."}, {"start": 121.47999999999999, "end": 123.96, "text": " So, what are we seeing here?"}, {"start": 123.96, "end": 130.35999999999999, "text": " Here, with green, you see the algorithm's estimation of the center of mass for this character,"}, {"start": 130.35999999999999, "end": 135.07999999999998, "text": " and our goal would be to reproduce that as faithfully as possible."}, {"start": 135.07999999999998, "end": 138.2, "text": " That would be the copying machine solution."}, {"start": 138.2, "end": 143.72, "text": " Here comes the key for style, and that key is using space-time 
bounds."}, {"start": 143.72, "end": 148.76, "text": " This means that the center of mass of the character can deviate from the original,"}, {"start": 148.76, "end": 154.2, "text": " but only as long as it remains strictly within these boundaries,"}, {"start": 154.2, "end": 159.16, "text": " and that is where the style emerges."}, {"start": 159.16, "end": 162.2, "text": " If we wish to add a little style to the equation,"}, {"start": 162.2, "end": 165.95999999999998, "text": " we can set relatively loose space-time bounds around it,"}, {"start": 165.96, "end": 169.48000000000002, "text": " leaving room for the AI to explore."}, {"start": 169.48000000000002, "end": 173.16, "text": " If we wish to strictly reproduce the reference motion,"}, {"start": 173.16, "end": 176.68, "text": " we can set the bounds to be really tight instead."}, {"start": 176.68, "end": 179.72, "text": " This is a great technique to learn running,"}, {"start": 179.72, "end": 186.44, "text": " jumping, rolling behaviors, and it can even perform a stylish cartwheel,"}, {"start": 186.44, "end": 188.68, "text": " and backflips."}, {"start": 188.68, "end": 191.56, "text": " Oh yeah, loving it!"}, {"start": 191.56, "end": 198.12, "text": " These space-time bounds also help us retarget the motion to different virtual body types."}, {"start": 198.12, "end": 202.04, "text": " It also helps us savage really bad quality reference motions"}, {"start": 202.04, "end": 205.4, "text": " and makes something useful out of them."}, {"start": 205.4, "end": 207.64000000000001, "text": " So, are we done here?"}, {"start": 207.64000000000001, "end": 209.0, "text": " Is that all?"}, {"start": 209.0, "end": 211.32, "text": " No, not in the slightest."}, {"start": 211.32, "end": 215.8, "text": " Now, hold on to your papers because here comes the best part."}, {"start": 215.8, "end": 217.8, "text": " With these novel space-time bounds,"}, {"start": 217.8, "end": 222.36, "text": " we can specify additional stylistic choices to the character moves."}, {"start": 222.36, "end": 226.52, "text": " For instance, we can encourage the character to use more energy"}, {"start": 226.52, "end": 229.64000000000001, "text": " for a more intense dancing sequence,"}, {"start": 229.64000000000001, "end": 237.48000000000002, "text": " or we can make it sleepier by asking it to decrease its energy use."}, {"start": 237.48000000000002, "end": 241.32000000000002, "text": " And I wonder if we can put bounds on the energy use,"}, {"start": 241.32000000000002, "end": 242.92000000000002, "text": " can we do more?"}, {"start": 242.92000000000002, "end": 246.68, "text": " For instance, do the same with body volume use."}, {"start": 246.68, "end": 250.28, "text": " Oh yeah, this really opens up new kinds of motions"}, {"start": 250.28, "end": 253.72, "text": " that I haven't seen virtual characters perform yet."}, {"start": 253.72, "end": 259.40000000000003, "text": " For instance, this chap was encouraged to use its entire body volume for a walk,"}, {"start": 259.40000000000003, "end": 263.32, "text": " and thus looks like someone who is clearly looking for trouble."}, {"start": 264.2, "end": 269.96000000000004, "text": " And this poor thing just finished their paper for a conference deadline and is barely alive."}, {"start": 269.96, "end": 276.44, "text": " We can even mix multiple motions together."}, {"start": 276.44, "end": 282.91999999999996, "text": " For instance, what could be a combination of a regular running sequence and a band walk?"}, {"start": 
282.91999999999996, "end": 285.47999999999996, "text": " Well, this."}, {"start": 285.47999999999996, "end": 292.52, "text": " And if we have a standard running sequence and a happy walk,"}, {"start": 292.52, "end": 296.12, "text": " we can fuse them into a happy running sequence."}, {"start": 296.12, "end": 298.12, "text": " How cool is that?"}, {"start": 298.12, "end": 304.28000000000003, "text": " So, with this technique, we can finally not only teach virtual characters to perform"}, {"start": 304.28000000000003, "end": 306.36, "text": " nearly any kind of reference motion,"}, {"start": 307.0, "end": 310.52, "text": " but we can even ask them to do this with style."}, {"start": 311.32, "end": 313.96, "text": " What an incredible idea, loving it."}, {"start": 314.92, "end": 320.52, "text": " Now, before we go, I would like to show you a short message that we got that melted my heart."}, {"start": 321.24, "end": 325.8, "text": " This I got from Nathan, who has been inspired by these incredible works,"}, {"start": 325.8, "end": 330.28000000000003, "text": " and he decided to turn his life around and go back to study more."}, {"start": 331.0, "end": 336.6, "text": " I love my job, and reading messages like this is one of the absolute best parts of it."}, {"start": 337.32, "end": 338.76, "text": " Congratulations, Nathan."}, {"start": 338.76, "end": 340.76, "text": " Thank you so much and good luck."}, {"start": 341.24, "end": 345.56, "text": " If you feel that you have a similar story with this video series,"}, {"start": 345.56, "end": 347.48, "text": " make sure to let us know in the comments."}, {"start": 348.2, "end": 351.72, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 351.72, "end": 357.72, "text": " If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 357.72, "end": 364.52000000000004, "text": " They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances,"}, {"start": 364.52000000000004, "end": 372.12, "text": " and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure."}, {"start": 372.12, "end": 377.48, "text": " Plus, they are the only Cloud service with 48GB RTX 8000."}, {"start": 377.48, "end": 384.04, "text": " Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances,"}, {"start": 384.04, "end": 385.8, "text": " workstations, or servers."}, {"start": 385.8, "end": 392.52000000000004, "text": " Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today."}, {"start": 392.52000000000004, "end": 398.20000000000005, "text": " Our thanks to Lambda for their long-standing support and for helping us make better videos for you."}, {"start": 398.2, "end": 408.2, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
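Since the transcript above explains the space-time bound mechanism only in words, here is a small Python sketch of how a bound on the center of mass could be enforced during a physics rollout. The box-shaped bound, the slack value and the simple survival-style scoring are assumptions made for illustration, not the exact formulation of the paper.

import numpy as np

def within_bounds(com_sim, com_ref, slack):
    # The simulated center of mass may drift from the reference,
    # but only inside a box of half-width `slack` around it.
    return bool(np.all(np.abs(com_sim - com_ref) <= slack))

def rollout_score(sim_coms, ref_coms, slack=0.15):
    # Loose slack leaves room for style; tight slack forces near-exact imitation.
    score = 0.0
    for com_sim, com_ref in zip(sim_coms, ref_coms):
        if not within_bounds(np.asarray(com_sim), np.asarray(com_ref), slack):
            break          # first violation ends the episode
        score += 1.0       # one point per frame spent inside the bounds
    return score

Extra stylistic knobs such as energy use or body-volume use, mentioned in the episode, can be treated the same way: additional quantities that are bounded or rewarded during the rollout.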
Two Minute Papers
https://www.youtube.com/watch?v=yc1WpkthV3g
This AI Made Me Look Like Obi-Wan Kenobi! 🧔
❤️ Check out Fully Connected by Weights & Biases: https://wandb.me/papers 📝 The paper "StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery" is available here: - https://arxiv.org/abs/2103.17249 - https://github.com/orpatashnik/StyleCLIP 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today's paper is about creating synthetic human faces, and not only that, but it can also make me look like Obi-Wan Kenobi. You will see the rest of this footage in a few minutes. Now, of course, this is not the first paper to generate artificial human faces. For instance, in December 2019, a technique by the name StyleGAN2 was published. This is a neural network based learning algorithm that is capable of synthesizing these eye-poppingly detailed images of human beings that don't even exist. This work answered some questions and, as any good paper, raised many more good ones. For instance, generating images of virtual humans is fine, but what if the results are not exactly what we are looking for? Can we have some artistic control over the outputs? How do we even tell the AI what we are looking for? Well, we are in luck, because StyleGAN2 offers somewhat rudimentary control over the outputs, where we can give it input images of two people and fuse them together. Now that is absolutely amazing, but I wonder if we can ask for a little more. Can we get even more granular control over these images? What if we could just type in what we are looking for, and somehow the AI would understand and execute our wishes? Is that possible, or is that science fiction? Well, hold on to your papers and let's see. This new technique works as follows. We type what aspect of the input image we wish to change and what the change should be. Wow, really cool. And we can even play with these sliders to adjust the magnitude of the changes as well. This means that we can give someone a new hairstyle, add or remove makeup, or give them some wrinkles for good measure. Now, the original StyleGAN2 method worked on not only humans, but on a multitude of different classes too. And the new technique also inherits this property. Look, we can even design new car shapes, make them a little sportier, or make our adorable cat even cuter. For some definition of cuter, of course. We can even make their hair longer, or change their colors, and the results are of super high quality. Absolutely stunning. While we are enjoying some more results here, make sure to have a look at the paper in the video description. And if you do, you will find that we really just scratched the surface here. For instance, it can even add clouds to the background of an image, or redesign the architecture of buildings, and much, much more. There are also comparisons against previous methods in there, showcasing the improvements of the new method. And now, let's experiment a little on me. Look, this is me here, after I got locked up for dropping my papers. And I spent so long in there that I grew a beard. Or I mean, a previous learning based AI called StyleFlow gave me one. And since dropping your papers is a serious crime, the sentence is long. Quite long. Ouch, I hereby promise to never drop my papers ever again. So now let's try to move forward with this image and give it to this new algorithm for some additional work. This is the original. And by original, I mean the image with the added algorithmic beard from a previous AI. And this is the embedded version of the image. This image looks a little different. Why is that? It is because StyleGAN2 runs an embedding operation on the photo before starting its work. This is its own internal understanding of my image, if you will. This is great information, and is something that we can only experience if we have hands-on experience with the algorithm. And now, let's use this new technique to apply some more magic to this image. This is where the goodness happens. And, oh my, it does not disappoint. You see, it can gradually transform me into Obi-Wan Kenobi, an elegant algorithm for a more civilized age. But that's not all. It can also create a ginger Károly, hippie Károly, Károly who found a shiny new paper on fluid simulations, and Károly who read said paper outside for quite a while and was perhaps disappointed. And now, hold on to your papers and please welcome Dr. Karolina Zsolnai-Fehér. And one more, I apologize in advance, rockstar Károly with a mohawk. How cool is that? I would like to send a huge thank you to the authors for taking their time out of their workday to create these images only for us. You can really only see this here on Two Minute Papers. And as you see, the pace of progress in machine learning research is absolutely stunning. And with this, the limit of our artistic workflow is not going to be our mechanical skills, but only our imagination. What a time to be alive! This video has been supported by Weights & Biases. Check out their recent offering, Fully Connected, a place where they bring machine learning practitioners together to share and discuss their ideas, learn from industry leaders, and even collaborate on projects together. You see, I get messages from you fellow scholars telling me that you have been inspired by the series, but don't really know where to start. And here it is. Fully Connected is a great way to learn about the fundamentals, how to reproduce experiments, get your papers accepted to a conference, and more. Make sure to visit them through wandb.me slash papers or just click the link in the video description. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.64, "end": 10.16, "text": " Today's paper is about creating synthetic human faces and not only that,"}, {"start": 10.16, "end": 13.68, "text": " but it can also make me look like Obi-Wan Kenobi."}, {"start": 13.68, "end": 17.04, "text": " You will see the rest of this footage in a few minutes."}, {"start": 17.04, "end": 22.32, "text": " Now, of course, this is not the first paper to generate artificial human faces."}, {"start": 22.32, "end": 28.48, "text": " For instance, in December 2019, a technique by the name Stagian II was published."}, {"start": 28.48, "end": 33.28, "text": " This is a neural network based learning algorithm that is capable of synthesizing"}, {"start": 33.28, "end": 38.08, "text": " these eye-poppingly detailed images of human beings that don't even exist."}, {"start": 38.8, "end": 44.4, "text": " This work answered some questions and, as any good paper, raised many more good ones."}, {"start": 45.120000000000005, "end": 51.84, "text": " For instance, generating images of virtual humans is fine, but what if the results are not exactly"}, {"start": 51.84, "end": 56.400000000000006, "text": " what we are looking for? Can we have some artistic control over the outputs?"}, {"start": 56.4, "end": 60.08, "text": " How do we even tell the AI what we are looking for?"}, {"start": 60.8, "end": 66.88, "text": " Well, we are in luck because Stagian II offers somewhat rudimentary control over the outputs"}, {"start": 66.88, "end": 71.84, "text": " where we can give it input images of two people and fuse them together."}, {"start": 73.12, "end": 79.03999999999999, "text": " Now that is absolutely amazing, but I wonder if we can ask for a little more."}, {"start": 79.03999999999999, "end": 82.64, "text": " Can we get even more granular control over these images?"}, {"start": 82.64, "end": 90.48, "text": " What if we could just type in what we are looking for and somehow the AI would understand and execute"}, {"start": 90.48, "end": 95.04, "text": " our wishes? Is that possible or is that science fiction?"}, {"start": 95.04, "end": 101.28, "text": " Well, hold on to your papers and let's see. This new technique works as follows."}, {"start": 101.28, "end": 107.28, "text": " We type what aspect of the input image we wish to change and what the change should be."}, {"start": 107.28, "end": 118.72, "text": " Wow, really cool. And we can even play with these sliders to adjust the magnitude of the changes"}, {"start": 118.72, "end": 125.04, "text": " as well. This means that we can give someone a new hairstyle, add or remove makeup,"}, {"start": 128.48, "end": 130.88, "text": " or give them some wrinkles for good measure."}, {"start": 130.88, "end": 140.24, "text": " Now the original Stagian II method worked on not only humans, but on a multitude of different classes too."}, {"start": 141.35999999999999, "end": 148.72, "text": " And the new technique also inherits this property. Look, we can even design new car shapes,"}, {"start": 148.72, "end": 153.68, "text": " make them a little sportier, or make our adorable cat even cuter."}, {"start": 153.68, "end": 159.6, "text": " For some definition of cuter, of course. 
We can even make their hair longer,"}, {"start": 165.92000000000002, "end": 170.64000000000001, "text": " or change their colors and the results are of super high quality."}, {"start": 170.64000000000001, "end": 175.68, "text": " Absolutely stunning. While we are enjoying some more results here,"}, {"start": 175.68, "end": 178.8, "text": " make sure to have a look at the paper in the video description."}, {"start": 178.8, "end": 184.32000000000002, "text": " And if you do, you will find that we really just scratch the surface here. For instance,"}, {"start": 184.32000000000002, "end": 191.28, "text": " it can even add clouds to the background of an image, or redesign the architecture of buildings,"}, {"start": 191.28, "end": 197.44, "text": " and much, much more. There are also comparisons against previous methods in there showcasing the"}, {"start": 197.44, "end": 205.04000000000002, "text": " improvements of the new method. And now let's experiment a little on me. Look, this is me here,"}, {"start": 205.04, "end": 212.16, "text": " after I got locked up for dropping my papers. And I spent so long in there that I grew a beard."}, {"start": 212.95999999999998, "end": 219.35999999999999, "text": " Or I mean a previous learning based AI called Styrofo gave me one. And since dropping your"}, {"start": 219.35999999999999, "end": 227.68, "text": " papers is a serious crime, the sentence is long. Quite long. Ouch, I hereby promise to never drop"}, {"start": 227.68, "end": 234.39999999999998, "text": " my papers ever again. So now let's try to move forward with this image and give it to this new"}, {"start": 234.4, "end": 241.92000000000002, "text": " algorithm for some additional work. This is the original. And by original, I mean the image with"}, {"start": 241.92000000000002, "end": 248.48000000000002, "text": " the added algorithmic beard from a previous AI. And this is the embedded version of the image."}, {"start": 249.36, "end": 257.2, "text": " This image looks a little different. Why is that? It is because Stagen 2 runs an embedding"}, {"start": 257.2, "end": 263.68, "text": " operation on the photo before starting its work. This is its own internal understanding of my image"}, {"start": 263.68, "end": 269.68, "text": " if you will. This is great information and is something that we can only experience if we"}, {"start": 269.68, "end": 275.84000000000003, "text": " have hands-on experience with the algorithm. And now let's use this new technique to apply some"}, {"start": 275.84000000000003, "end": 284.48, "text": " more magic to this image. This is where the goodness happens. And, oh my, it does not disappoint."}, {"start": 284.48, "end": 291.28000000000003, "text": " You see, it can gradually transform me into Obi-Wan Kenobi, an elegant algorithm for a more civilized"}, {"start": 291.28, "end": 297.84, "text": " age. But that's not all. It can also create a ginger caroy, hippie caroy,"}, {"start": 299.35999999999996, "end": 306.32, "text": " caroy who found a shiny new paper on fluid simulations, and caroy who read set paper outside"}, {"start": 306.32, "end": 313.84, "text": " for quite a while and was perhaps disappointed. And now hold on to your papers and please welcome"}, {"start": 313.84, "end": 323.44, "text": " Dr. Karolina Jean-Eiffahier. And one more, I apologize in advance, rockstar caroy with a Mohawk."}, {"start": 324.64, "end": 330.71999999999997, "text": " How cool is that? 
I would like to send a huge thank you to the authors for taking their time"}, {"start": 330.71999999999997, "end": 336.71999999999997, "text": " out of their workday to create these images only for us. You can really only see this here"}, {"start": 336.71999999999997, "end": 342.0, "text": " on two-minute papers. And as you see, the piece of progress in machine learning research is"}, {"start": 342.0, "end": 348.08, "text": " absolutely stunning. And with this, the limit of our artistic workflow is not going to be our"}, {"start": 348.08, "end": 354.56, "text": " mechanical skills, but only our imagination. What a time to be alive! This video has been"}, {"start": 354.56, "end": 360.88, "text": " supported by weights and biases. Check out the recent offering fully connected, a place where"}, {"start": 360.88, "end": 366.56, "text": " they bring machine learning practitioners together to share and discuss their ideas,"}, {"start": 366.56, "end": 373.04, "text": " learn from industry leaders, and even collaborate on projects together. You see, I get messages"}, {"start": 373.04, "end": 378.96, "text": " from you fellow scholars telling me that you have been inspired by the series, but don't really"}, {"start": 378.96, "end": 385.92, "text": " know where to start. And here it is. Fully connected is a great way to learn about the fundamentals,"}, {"start": 385.92, "end": 392.64, "text": " how to reproduce experiments, get your papers accepted to a conference, and more. Make sure to"}, {"start": 392.64, "end": 399.2, "text": " visit them through wnb.me slash papers or just click the link in the video description."}, {"start": 399.2, "end": 404.71999999999997, "text": " Our thanks to weights and biases for their long-standing support and for helping us make better"}, {"start": 404.72, "end": 425.04, "text": " videos for you. Thanks for watching and for your generous support, and I'll see you next time."}]
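For a sense of how typed instructions can steer a face generator as described above, here is a rough sketch of the latent-optimization flavour of StyleCLIP: starting from the embedded latent code of the photo, gradient descent nudges the code so that CLIP judges the rendered image to match the text prompt, while an L2 term keeps the result close to the original person. The hyperparameter values, the skipped CLIP image preprocessing and the omitted identity-preservation loss are simplifications on my part, not the reference implementation.

import torch
import clip  # OpenAI's CLIP package, used here as the text-image similarity critic

def text_guided_edit(generator, w_source, prompt, steps=200, lr=0.1, l2_weight=0.008):
    device = w_source.device
    model, _ = clip.load("ViT-B/32", device=device)
    model = model.float()  # keep everything in float32 for simplicity
    text_tokens = clip.tokenize([prompt]).to(device)
    w = w_source.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        img = generator(w)  # render the current guess
        # CLIP expects 224x224 inputs; proper normalization is omitted for brevity.
        img_small = torch.nn.functional.interpolate(img, size=(224, 224), mode="bilinear")
        sim = torch.cosine_similarity(model.encode_image(img_small),
                                      model.encode_text(text_tokens))
        # Pull the image toward the prompt, but keep the code near the original face.
        loss = (1.0 - sim).mean() + l2_weight * ((w - w_source) ** 2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()

Called with prompts along the lines of "a face with a mohawk", this kind of loop is what produces the gradual transformations shown in the episode; the published method additionally works in StyleGAN's extended latent space and adds an identity loss so the edited person stays recognizable.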
Two Minute Papers
https://www.youtube.com/watch?v=iXqLTJFTUGc
AI Makes Near-Perfect DeepFakes in 40 Seconds! 👨
❤️ Check out Perceptilabs and sign up for a free demo here: https://www.perceptilabs.com/papers 📝 The paper "Iterative Text-based Editing of Talking-heads Using Neural Retargeting" is available here: https://davidyao.me/projects/text2vid/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Thumbnail background image credit: École polytechnique - J.Barande - https://www.flickr.com/photos/117994717@N06/36055906023 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #deepfake
Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zonai-Fahir. Imagine that you are a film critic and you are recording a video review of a movie, but unfortunately you are not the best kind of movie critic and you record it before watching the movie. But here is the problem, you don't really know if it's going to be any good. So you record this. I'm gonna give her a territory of B-. So far so good. Nothing too crazy going on here. However, you go in, watch the movie and it turns out to be amazing. So what do we do if we don't have time to re-record the video? Well, we grab this AI, type in the new text and it will give us this. I'm gonna give her a territory of A plus. Whoa! What just happened? What kind of black magic is this? Well, let's look behind the person. On the blackboard you see some delicious partial derivatives. And I am starting to think that this person is not a movie critic. And of course he isn't because this is Yoshua Benjiro, a legendary machine learning researcher. And this was an introduction video where he says this. And what happened is that it has been repurposed by this new D-Fake generator AI where we can type in anything we wish and outcomes a near-perfect result. It synthesizes both the video and audio content for us. But we are not quite done yet. Something is missing. If the movie gets an A plus, the gestures of the subject also have to reflect that this is a favorable review. So what do we do? Maybe add the smile there. Is that possible? I am going to give her a redatory an A plus. Oh yes, there we go. Amazing. Let's have a closer look at one more example where we can see how easily we can drop in new text with this editor. Why you don't worry about city items? Marco movies are not cinema. Now, this is not the first method performing this task. Previous techniques typically required hours and hours of video of a target subject. So how much training data does this require to perform all this? Well, let's have a look together. Look, this is not the same footage copy pasted three times. This is a synthesized video output if we have 10 minutes of video data from the test subject. This looks nearly as good as fewer sharp details, but in return, this requires only two and a half minutes. And here comes the best part. If you look here, you may be able to see the difference. And if you have been holding onto your paper so far, now squeeze that paper because synthesizing this only required 30 seconds of video footage of the target subject. My goodness, but we are not nearly done yet. It can do more. For instance, it can tone up or down the intensity of gestures to match the tone of what is being said. Look, so how does this wizardry happen? Well, this new technique improves two things really well. One is that it can search for phonemes and other units better. Here is an example, we crossed out the word spider and we wish to use the word fox instead and it tries to assemble this word from previous occurrences of individual sounds. For instance, the ox part is available when the test subject orders the word box. And two, it can stitch them together better than previous methods. And surely, this means that since it needs less data, the synthesis must take a great deal longer. Right? No, not at all. The synthesis part only takes 40 seconds. And even if it couldn't do this so quickly, the performance control aspect where we can tone the gestures up or down or at the smile would still be an amazing selling point in and of itself. 
But no, it does all of these things quickly and with high quality at the same time. Wow. I now invite you to look at the results carefully and give them a hard time. Did you find anything out of the ordinary? Did you find this believable? Let me know in the comments below. The authors of the paper also conducted a user study with 110 participants who were asked to look at 25 videos and say which one they felt was real. The results showed that the new technique outperforms previous techniques even if they have access to 12 times more training data. This is absolutely amazing, but what is even better, the longer the video clips were, the better this method fared. What a time to be alive. Now, of course, beyond the many amazing use cases of deepfakes in reviving deceased actors, creating beautiful visual art, redubbing movies and more, we have to be vigilant about the fact that they can also be used for nefarious purposes. The goal of this video is to let you and the public know that these deepfakes can now be created quickly and inexpensively, and they don't require a trained scientist anymore. If this can be done, it is of utmost importance that we all know about it. And beyond that, whenever they invite me, I inform key political and military decision makers about the existence and details of these techniques to make sure that they also know about these, and, using that knowledge, they can make better decisions for us. You can see me doing that here. Note that these talks and consultations all happen free of charge, and if they keep inviting me, I'll keep showing up to help with this in the future as a service to the public. PerceptiLabs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers to easily install the free local version of their system today. Thanks for watching and for your generous support and I'll see you next time.
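To make the phoneme search and stitching idea above more concrete, here is a minimal Python sketch, and not the paper's actual neural retargeting pipeline: the phoneme labels, the clip library, the timing interpolation and the greedy matching below are all simplifying assumptions made up for illustration.

# Minimal sketch (not the paper's pipeline): assembling a new word from
# phoneme snippets the subject has already spoken somewhere in the footage.

# Hypothetical library: phoneme sequences already uttered by the subject,
# mapped to (start, end) times in the source video, in seconds.
CLIP_LIBRARY = {
    ("B", "AA", "K", "S"): (12.4, 12.9),   # "box"
    ("S", "P"): (30.1, 30.3),              # start of "spider"
    ("F",): (45.0, 45.1),                  # an "f" sound from another word
}

def assemble_word(target_phonemes):
    """Greedily cover the target phoneme sequence with the longest
    matching snippets found anywhere inside the library entries."""
    segments, i = [], 0
    while i < len(target_phonemes):
        best = None
        for phones, (start, end) in CLIP_LIBRARY.items():
            for j in range(len(phones)):
                k = 0
                while (i + k < len(target_phonemes) and j + k < len(phones)
                       and phones[j + k] == target_phonemes[i + k]):
                    k += 1
                if k > 0 and (best is None or k > best[0]):
                    # Estimate snippet timing by linear interpolation.
                    dur = (end - start) / len(phones)
                    best = (k, start + j * dur, start + (j + k) * dur)
        if best is None:
            raise ValueError(f"no source material for {target_phonemes[i]}")
        k, s, e = best
        segments.append((s, e))
        i += k
    return segments  # snippets a stitching stage would then blend together

# "fox" = F + AA + K + S: the F comes from one clip, the "ox" is reused from "box".
print(assemble_word(("F", "AA", "K", "S")))

In the real system, a learned stitching and retargeting stage would smooth the seams between these snippets and synthesize matching video frames; this sketch only shows why reusing the "ox" from "box" already gives us most of "fox".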
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zonai-Fahir."}, {"start": 4.64, "end": 9.92, "text": " Imagine that you are a film critic and you are recording a video review of a movie,"}, {"start": 9.92, "end": 17.12, "text": " but unfortunately you are not the best kind of movie critic and you record it before watching the movie."}, {"start": 17.12, "end": 22.32, "text": " But here is the problem, you don't really know if it's going to be any good."}, {"start": 22.32, "end": 24.560000000000002, "text": " So you record this."}, {"start": 24.560000000000002, "end": 28.0, "text": " I'm gonna give her a territory of B-."}, {"start": 28.0, "end": 32.0, "text": " So far so good. Nothing too crazy going on here."}, {"start": 32.0, "end": 37.68, "text": " However, you go in, watch the movie and it turns out to be amazing."}, {"start": 37.68, "end": 42.16, "text": " So what do we do if we don't have time to re-record the video?"}, {"start": 42.16, "end": 48.0, "text": " Well, we grab this AI, type in the new text and it will give us this."}, {"start": 48.0, "end": 51.44, "text": " I'm gonna give her a territory of A plus."}, {"start": 51.44, "end": 54.72, "text": " Whoa! What just happened?"}, {"start": 54.72, "end": 57.36, "text": " What kind of black magic is this?"}, {"start": 57.36, "end": 59.68, "text": " Well, let's look behind the person."}, {"start": 59.68, "end": 63.36, "text": " On the blackboard you see some delicious partial derivatives."}, {"start": 63.36, "end": 67.36, "text": " And I am starting to think that this person is not a movie critic."}, {"start": 68.16, "end": 74.88, "text": " And of course he isn't because this is Yoshua Benjiro, a legendary machine learning researcher."}, {"start": 75.44, "end": 78.96000000000001, "text": " And this was an introduction video where he says this."}, {"start": 79.92, "end": 85.68, "text": " And what happened is that it has been repurposed by this new D-Fake generator AI"}, {"start": 85.68, "end": 90.88000000000001, "text": " where we can type in anything we wish and outcomes a near-perfect result."}, {"start": 91.52000000000001, "end": 96.08000000000001, "text": " It synthesizes both the video and audio content for us."}, {"start": 96.08000000000001, "end": 98.24000000000001, "text": " But we are not quite done yet."}, {"start": 98.24000000000001, "end": 99.60000000000001, "text": " Something is missing."}, {"start": 100.16000000000001, "end": 105.76, "text": " If the movie gets an A plus, the gestures of the subject also have to reflect that this is a"}, {"start": 105.76, "end": 111.2, "text": " favorable review. So what do we do? Maybe add the smile there."}, {"start": 111.92000000000002, "end": 112.88000000000001, "text": " Is that possible?"}, {"start": 112.88, "end": 117.19999999999999, "text": " I am going to give her a redatory an A plus."}, {"start": 119.44, "end": 122.16, "text": " Oh yes, there we go. 
Amazing."}, {"start": 122.96, "end": 128.56, "text": " Let's have a closer look at one more example where we can see how easily we can drop in new"}, {"start": 128.56, "end": 129.76, "text": " text with this editor."}, {"start": 130.8, "end": 133.12, "text": " Why you don't worry about city items?"}, {"start": 136.48, "end": 138.32, "text": " Marco movies are not cinema."}, {"start": 138.32, "end": 142.32, "text": " Now, this is not the first method performing this task."}, {"start": 142.32, "end": 147.76, "text": " Previous techniques typically required hours and hours of video of a target subject."}, {"start": 147.76, "end": 152.48, "text": " So how much training data does this require to perform all this?"}, {"start": 152.48, "end": 154.95999999999998, "text": " Well, let's have a look together."}, {"start": 154.95999999999998, "end": 159.2, "text": " Look, this is not the same footage copy pasted three times."}, {"start": 159.2, "end": 165.92, "text": " This is a synthesized video output if we have 10 minutes of video data from the test subject."}, {"start": 165.92, "end": 176.39999999999998, "text": " This looks nearly as good as fewer sharp details, but in return, this requires only two and a half minutes."}, {"start": 176.39999999999998, "end": 179.83999999999997, "text": " And here comes the best part."}, {"start": 179.83999999999997, "end": 182.48, "text": " If you look here, you may be able to see the difference."}, {"start": 182.48, "end": 189.51999999999998, "text": " And if you have been holding onto your paper so far, now squeeze that paper because synthesizing this"}, {"start": 189.51999999999998, "end": 194.0, "text": " only required 30 seconds of video footage of the target subject."}, {"start": 194.0, "end": 198.24, "text": " My goodness, but we are not nearly done yet."}, {"start": 198.24, "end": 199.68, "text": " It can do more."}, {"start": 199.68, "end": 207.28, "text": " For instance, it can tone up or down the intensity of gestures to match the tone of what is being said."}, {"start": 207.28, "end": 211.36, "text": " Look, so how does this wizardry happen?"}, {"start": 212.08, "end": 216.0, "text": " Well, this new technique improves two things really well."}, {"start": 216.0, "end": 220.0, "text": " One is that it can search for phonemes and other units better."}, {"start": 220.0, "end": 226.64, "text": " Here is an example, we crossed out the word spider and we wish to use the word fox instead"}, {"start": 226.64, "end": 231.76, "text": " and it tries to assemble this word from previous occurrences of individual sounds."}, {"start": 232.32, "end": 237.52, "text": " For instance, the ox part is available when the test subject orders the word box."}, {"start": 238.24, "end": 242.4, "text": " And two, it can stitch them together better than previous methods."}, {"start": 243.2, "end": 249.76, "text": " And surely, this means that since it needs less data, the synthesis must take a great deal longer."}, {"start": 249.76, "end": 252.64, "text": " Right? 
No, not at all."}, {"start": 252.64, "end": 255.6, "text": " The synthesis part only takes 40 seconds."}, {"start": 256.32, "end": 261.59999999999997, "text": " And even if it couldn't do this so quickly, the performance control aspect where we can tone the"}, {"start": 261.59999999999997, "end": 268.32, "text": " gestures up or down or at the smile would still be an amazing selling point in and of itself."}, {"start": 268.88, "end": 275.2, "text": " But no, it does all of these things quickly and with high quality at the same time."}, {"start": 275.2, "end": 281.92, "text": " Wow, I now invite you to look at the results carefully and give them a hard time."}, {"start": 282.32, "end": 284.4, "text": " Did you find anything out of ordinary?"}, {"start": 284.4, "end": 288.15999999999997, "text": " Did you find this believable? Let me know in the comments below."}, {"start": 288.15999999999997, "end": 295.12, "text": " The authors of the paper also conducted a user study with 110 participants who were asked to"}, {"start": 295.12, "end": 299.36, "text": " look at 25 videos and say which one they felt was real."}, {"start": 299.36, "end": 308.32, "text": " The results showed that the new technique outperforms previous techniques even if they have access to 12 times more training data."}, {"start": 308.32, "end": 316.08000000000004, "text": " Which is absolutely amazing, but what is even better, the longer the video clips were, the better this method fared."}, {"start": 316.08000000000004, "end": 318.0, "text": " What a time to be alive."}, {"start": 318.0, "end": 324.48, "text": " Now, of course, beyond the many amazing use cases of deepfakes in reviving deceased actors,"}, {"start": 324.48, "end": 333.44, "text": " creating beautiful visual art, redubbing movies and more, we have to be vigilant about the fact that they can also be used for nefarious purposes."}, {"start": 333.44, "end": 343.92, "text": " The goal of this video is to let you and the public know that these deepfakes can now be created quickly and inexpensively and they don't require a trained scientist anymore."}, {"start": 343.92, "end": 348.96000000000004, "text": " If this can be done, it is of utmost importance that we all know about it."}, {"start": 348.96, "end": 364.96, "text": " And beyond that, whenever they invite me, I inform key political and military decision makers about the existence and details of these techniques to make sure that they also know about these and using that knowledge, they can make better decisions for us."}, {"start": 364.96, "end": 367.2, "text": " You can see me doing that here."}, {"start": 367.2, "end": 377.76, "text": " Note that these talks and consultations all happen free of charge and if they keep inviting me, I'll keep showing up to help with this in the future as a service to the public."}, {"start": 377.76, "end": 385.76, "text": " PerceptiLabs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible."}, {"start": 385.76, "end": 395.76, "text": " This gives you a faster way to build out models with more transparency into how your model is architected, how it performs and how to debug it."}, {"start": 395.76, "end": 399.76, "text": " Look, it lets you toggle between the visual modeler and the code editor."}, {"start": 399.76, "end": 409.76, "text": " It even generates visualizations for all the model variables and gives you recommendations both during modeling and training and does all this automatically."}, {"start": 409.76, "end": 415.76, 
"text": " I only wish I had a tool like this when I was working on my neural networks during my PhD years."}, {"start": 415.76, "end": 422.76, "text": " Visit perceptiLabs.com slash papers to easily install the free local version of their system today."}, {"start": 422.76, "end": 432.76, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=2jwVDRKKDME
Burning Down Virtual Trees... In Real Time! 🌲🔥
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/authors/adv-dl/reports/An-Introduction-to-Adversarial-Examples-in-Deep-Learning--VmlldzoyMTQwODM 📝 The paper "Interactive Wood Combustion for Botanical Tree Models" is available here: https://repository.kaust.edu.sa/bitstream/10754/626814/1/a197-pirk.pdf https://github.com/art049/InteractiveWoodCombustion 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #gamedev #physics
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to burn some virtual trees. This is a fantastic computer graphics paper from four years ago. I ask you to hold onto your papers immediately and do not get surprised if it spontaneously lights on fire. Yes, this work is about simulating wood combustion, and it is one of my favorite kinds of papers that takes an extremely narrow task and absolutely nails it. Everything we can possibly ask for from such a simulation is there. Each leaf has its own individual mass and area, they burn individually, transfer heat to their surroundings, and finally, branches bend and, look, can eventually even break in this process. If we look under the hood, we see that these trees are defined as a system of connected particles embedded within a physics simulator. These particles have their own properties; for instance, you see the temperature changes here at different regions of the tree as the fire gradually consumes it. Now, if you have been holding onto your papers, squeeze that paper and look. What do you think, is this fire movement pre-programmed? It doesn't seem like it. This seems more like some real-time mouse movement, which is great news indeed. And yes, that means that this simulation and all the interactions we can do with it run in real time. Here is a list of the many quantities it can simulate. Oh my goodness, there's so much yummy physics here. I don't even know where to start. Let's pick the water content here and see how changing it would look. This is a tree with a lower water content. It catches fire rather easily. And now, let's pour some rain on it. Then afterwards, look, it becomes much more difficult to light on fire and emits huge plumes of dense, dense smoke. And we can even play with these parameters in real time. We can also have a ton of fun by choosing non-physical parameters for the breaking coefficient, which of course can lead to the tree suddenly falling apart in a non-physical way. The cool thing here is that we can either set these parameters to physically plausible values and get a really realistic simulation, or we can choose to bend reality in directions that are in line with our artistic vision. How cool is that? I could play with this all day. So, as an experienced scholar, you ask, OK, this looks great, but how good are these simulations really? Are they just good enough to fool the untrained eye, or are they indeed close to reality? I hope you know what's coming, because what is coming is my favorite part in all simulation research, and that is when we let reality be our judge and compare the simulation to that. This is a piece of real footage of a piece of burning wood, and this is the simulation. Well, we see that the resolution of the fire simulation was a little limited. It was four years ago after all; however, it runs very similarly to the real-life footage. Bravo! And all this was done in 2017. What a time to be alive! But we are not even close to being done yet. This paper teaches us one more important lesson. After publishing such an incredible work, it was accepted to the SIGGRAPH Asia 2017 conference. That is one of the most prestigious conferences in this research field. Getting a paper accepted here is equivalent to winning the Olympic gold medal of computer graphics research. So, with that, we would expect that the authors now revel in eternal glory. Right? Well, let's see. What? Is this serious? The original video was seen by fewer than a thousand people online.
How can that be? And the paper was referred to only ten times by other works in these four years. Now, you see, that is not so bad in computer graphics at all. Computer graphics is an order, maybe even orders of magnitude, smaller field than machine learning. But I think this is an excellent demonstration of why I started this series. And it is because I get so excited by these incredible human achievements, and I feel that they deserve a little more love than they are given. And of course, these are so amazing. Everybody has to know about them. Happy to have you Fellow Scholars watching this and celebrating these papers with me for more than 500 episodes now. Thank you so much. It is a true honor to have such an amazing and receptive audience. This episode has been supported by Weights & Biases. In this post, they show you how to use their tool to fool a neural network into looking at a pig and being really sure that it is an airliner. If you work with learning algorithms on a regular basis, make sure to check out Weights & Biases. Their system is designed to help you organize your experiments, and it is so good it could shave off weeks or even months of work from your projects, and is completely free for all individuals, academics and open source projects. This really is as good as it gets. And it is hardly a surprise that they are now used by over 200 companies and research institutions. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
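As a rough illustration of the per-particle bookkeeping described above (temperature, water content, burning state, heat transfer between connected particles), here is a minimal Python sketch on a toy pair of particles. The constants, the explicit diffusion update and the simple drying-then-ignition rule are assumptions for illustration only, not the solver from the paper.

# Minimal sketch: heat transfer, drying and ignition on connected tree particles.
from dataclasses import dataclass, field

@dataclass
class BranchParticle:
    temperature: float = 20.0   # degrees Celsius
    water: float = 0.3          # water content, 0..1
    mass: float = 1.0           # remaining combustible mass
    burning: bool = False
    neighbors: list = field(default_factory=list)

IGNITION_TEMP = 250.0   # assumed ignition threshold
CONDUCTION = 0.05       # assumed heat conduction rate
BURN_HEAT = 40.0        # assumed heat released per unit time while burning
EVAPORATION = 0.02      # assumed water evaporated per unit time when hot

def step(particles, dt=0.01):
    # Heat exchange with neighbors (simple explicit diffusion).
    deltas = [0.0] * len(particles)
    for i, p in enumerate(particles):
        for j in p.neighbors:
            deltas[i] += CONDUCTION * (particles[j].temperature - p.temperature) * dt
    for i, p in enumerate(particles):
        p.temperature += deltas[i]
        if p.burning:
            p.temperature += BURN_HEAT * dt
            p.mass = max(0.0, p.mass - 0.05 * dt)
            if p.mass == 0.0:
                p.burning = False   # fuel exhausted
        # Wet wood must dry before it can ignite, matching the rain experiment.
        if p.temperature > 100.0 and p.water > 0.0:
            p.water = max(0.0, p.water - EVAPORATION * dt)
        elif p.temperature > IGNITION_TEMP and p.mass > 0.0:
            p.burning = True

# Two connected particles: ignite the first one and watch the fire spread.
a = BranchParticle(temperature=400.0, water=0.0, burning=True)
b = BranchParticle()
a.neighbors, b.neighbors = [1], [0]
for _ in range(3000):
    step([a, b])
print(round(b.temperature, 1), b.burning)

The wet particle first has to boil off its water before it can ignite, which is the same qualitative behavior as the rained-on tree in the footage: harder to light, and only burning once it has dried out.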
[{"start": 0.0, "end": 4.76, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Carlos John A. Feher."}, {"start": 4.76, "end": 7.88, "text": " Today we are going to burn some virtual trees."}, {"start": 7.88, "end": 12.32, "text": " This is a fantastic computer graphics paper from four years ago."}, {"start": 12.32, "end": 19.96, "text": " I ask you to hold onto your papers immediately and do not get surprised if it spontaneously lights on fire."}, {"start": 19.96, "end": 29.04, "text": " Yes, this work is about simulating wood combustion and it is one of my favorite kinds of papers that takes an extremely narrow task"}, {"start": 29.04, "end": 31.64, "text": " and absolutely nails it."}, {"start": 31.64, "end": 36.36, "text": " Everything we can possibly ask for from such assimilation is there."}, {"start": 36.36, "end": 45.08, "text": " Each leaf has its own individual mass and area, they burn individually, transfer heat to their surroundings,"}, {"start": 45.08, "end": 54.239999999999995, "text": " and finally, branches bend and look can eventually even break in this process."}, {"start": 54.24, "end": 64.64, "text": " If we look under the hood, we see that these trees are defined as a system of connected particles embedded within a physics simulator."}, {"start": 64.64, "end": 75.64, "text": " These particles have their own properties, for instance, you see the temperature changes here at different regions of the tree as the fire gradually consumes it."}, {"start": 75.64, "end": 81.44, "text": " Now, if you have been holding onto your papers, squeeze that paper and look."}, {"start": 81.44, "end": 86.64, "text": " What do you think is this fire movement pre-programmed?"}, {"start": 86.64, "end": 88.64, "text": " It doesn't seem like it."}, {"start": 88.64, "end": 93.44, "text": " This seems more like some real-time mouse movement, which is great news indeed."}, {"start": 93.44, "end": 103.44, "text": " And yes, that means that this simulation and all the interactions we can do with it runs in real time."}, {"start": 103.44, "end": 107.03999999999999, "text": " Here is a list of the many quantities it can simulate."}, {"start": 107.03999999999999, "end": 110.8, "text": " Oh my goodness, there's so much yummy physics here."}, {"start": 110.8, "end": 113.6, "text": " I don't even know where to start."}, {"start": 113.6, "end": 118.6, "text": " Let's pick the water content here and see how changing it would look."}, {"start": 118.6, "end": 122.0, "text": " This is a tree with a lower water content."}, {"start": 122.0, "end": 126.8, "text": " It catches fire rather easily."}, {"start": 126.8, "end": 129.8, "text": " And now, let's pour some rain on it."}, {"start": 129.8, "end": 139.6, "text": " Then afterwards, look, it becomes much more difficult to light on fire and emits huge plumes of dense, dense smoke."}, {"start": 139.6, "end": 145.4, "text": " And we can even play with these parameters in real time."}, {"start": 145.4, "end": 151.0, "text": " We can also have a ton of fun by choosing non-physical parameters for the breaking coefficient,"}, {"start": 151.0, "end": 157.2, "text": " which of course can lead to the tree suddenly falling apart in a non-physical way."}, {"start": 157.2, "end": 162.6, "text": " The cool thing here is that we can either set these parameters to physically plausible values"}, {"start": 162.6, "end": 171.4, "text": " and get a really realistic simulation or we can choose to bend reality in directions that are in line with our artistic 
vision."}, {"start": 171.4, "end": 173.4, "text": " How cool is that?"}, {"start": 173.4, "end": 176.4, "text": " I could play with this all day."}, {"start": 176.4, "end": 179.4, "text": " So, as an experienced scholar, you ask,"}, {"start": 179.4, "end": 184.6, "text": " OK, this looks great, but how good are these simulations really?"}, {"start": 184.6, "end": 191.0, "text": " Are they just good enough to fool the entry in die or are they indeed close to reality?"}, {"start": 191.0, "end": 197.4, "text": " I hope you know what's coming because what is coming is my favorite part in all simulation research"}, {"start": 197.4, "end": 203.0, "text": " and that is when we let reality be our judge and compare the simulation to that."}, {"start": 203.0, "end": 210.4, "text": " This is a piece of real footage of a piece of burning wood and this is the simulation."}, {"start": 210.4, "end": 215.0, "text": " Well, we see that the resolution of the fire simulation was a little limited."}, {"start": 215.0, "end": 222.0, "text": " It was four years ago after all, however, it runs very similarly to the real life footage."}, {"start": 222.0, "end": 226.4, "text": " Bravo! And all this was done in 2017."}, {"start": 226.4, "end": 228.4, "text": " What a time to be alive!"}, {"start": 228.4, "end": 231.2, "text": " But we are not even close to be done yet."}, {"start": 231.2, "end": 234.6, "text": " This paper teaches us one more important lesson."}, {"start": 234.6, "end": 241.6, "text": " After publishing such an incredible work, it was accepted to the Cigraf Asia 2017 conference."}, {"start": 241.6, "end": 245.79999999999998, "text": " That is, one of the most prestigious conferences in this research field."}, {"start": 245.79999999999998, "end": 252.2, "text": " Getting a paper accepted here is equivalent to winning the Olympic gold medal of computer graphics research."}, {"start": 252.2, "end": 258.4, "text": " So, with that, we would expect that the authors now revel in eternal glory."}, {"start": 258.4, "end": 259.6, "text": " Right?"}, {"start": 259.6, "end": 262.0, "text": " Well, let's see."}, {"start": 262.0, "end": 263.0, "text": " What?"}, {"start": 263.0, "end": 264.6, "text": " Is this serious?"}, {"start": 264.6, "end": 269.8, "text": " The original video was seen by less than a thousand people online."}, {"start": 269.8, "end": 271.8, "text": " How can that be?"}, {"start": 271.8, "end": 277.40000000000003, "text": " And the paper was referred to only ten times by other works in these four years."}, {"start": 277.40000000000003, "end": 281.6, "text": " Now, you see, that it is not so bad in computer graphics at all."}, {"start": 281.6, "end": 287.40000000000003, "text": " It is an order, maybe even orders of magnitude smaller field than machine learning."}, {"start": 287.40000000000003, "end": 292.2, "text": " But I think this is an excellent demonstration of why I started this series."}, {"start": 292.2, "end": 297.0, "text": " And it is because I get so excited by these incredible human achievements"}, {"start": 297.0, "end": 300.8, "text": " and I feel that they deserve a little more love than they are given."}, {"start": 300.8, "end": 303.4, "text": " And of course, these are so amazing."}, {"start": 303.4, "end": 305.6, "text": " Everybody has to know about them."}, {"start": 305.6, "end": 308.2, "text": " Happy to have you fellow scholars watching this"}, {"start": 308.2, "end": 313.2, "text": " and celebrating these papers with me for more than 500 episodes now."}, {"start": 
313.2, "end": 314.8, "text": " Thank you so much."}, {"start": 314.8, "end": 319.4, "text": " It is a true honor to have such an amazing and receptive audience."}, {"start": 319.4, "end": 322.8, "text": " This episode has been supported by weights and biases."}, {"start": 322.8, "end": 327.6, "text": " In this post, they show you how to use their tool to fool a neural network"}, {"start": 327.6, "end": 332.0, "text": " to look at a pig and be really sure that it is an airliner."}, {"start": 332.0, "end": 335.2, "text": " If you work with learning algorithms on a regular basis,"}, {"start": 335.2, "end": 337.8, "text": " make sure to check out weights and biases."}, {"start": 337.8, "end": 341.2, "text": " Their system is designed to help you organize your experiments"}, {"start": 341.2, "end": 346.40000000000003, "text": " and it is so good it could shave off weeks or even months of work from your projects"}, {"start": 346.40000000000003, "end": 352.0, "text": " and is completely free for all individuals, academics and open source projects."}, {"start": 352.0, "end": 354.4, "text": " This really is as good as it gets."}, {"start": 354.4, "end": 358.6, "text": " And it is hardly a surprise that they are now used by over 200 companies"}, {"start": 358.6, "end": 360.4, "text": " and research institutions."}, {"start": 360.4, "end": 364.6, "text": " Make sure to visit them through wnb.com slash papers"}, {"start": 364.6, "end": 367.0, "text": " or just click the link in the video description"}, {"start": 367.0, "end": 369.2, "text": " and you can get a free demo today."}, {"start": 369.2, "end": 372.6, "text": " Our thanks to weights and biases for their longstanding support"}, {"start": 372.6, "end": 375.4, "text": " and for helping us make better videos for you."}, {"start": 375.4, "end": 377.6, "text": " Thanks for watching and for your generous support"}, {"start": 377.6, "end": 382.6, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=gfMyGad1Gmc
5 Fiber-Like Tools That Can Now Be 3D-Printed!
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/authors/text-recognition-crnn-ctc/reports/Text-Recognition-With-CRNN-CTC-Network--VmlldzoxNTI5NDI 📝 The paper "Freely orientable microstructures for designing deformable 3D prints" and the Shadertoy implementation are available here: - https://hal.inria.fr/hal-02524371 - https://www.shadertoy.com/view/WtjfzW 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. I would like to show you some results from the area of 3D printing, a topic which is, I think, a little overlooked, and show you that works in this field are improving at an incredible pace. Now, a common theme among research papers in this area is that they typically allow us to design objects and materials by thinking about how they should look. Let's see if this is really true by applying the second law of papers, which says whatever you're thinking about, there is already a Two Minute Papers episode on that. Let's see if it applies here. For instance, just prescribing a shape for 3D printing is old, old news. Here is a previous technique that is able to print exotic materials. These are materials that we can start stretching, and if we do, instead of thinning, they get fatter. We can also 3D print filigree patterns with ease. These are detailed thin patterns typically found in jewelry, fabrics and ornaments, and as you may imagine, crafting such motifs on objects would be incredibly laborious to do by hand. We can also prescribe an image and 3D print an object that will cast a caustic pattern that shows exactly that image. Beautiful! And printing textured 3D objects in a number of different ways is also possible. This is called hydrographic printing and is one of the most flamboyant ways of doing that. So, what happens here? Well, we place a film in water, use a chemical activator spray on it, and shove the object in the water, and oh yes, there we go. Note that these were all showcased in previous episodes of this series. So, in 3D printing, we typically design things by how they should look. Of course, how else would we be designing? Well, the authors of this crazy paper don't care about looks at all. Well, what else would they care about if not the looks? Get this, they care about how these objects deform. Yes, with this work, we can design deformations, and the algorithm will find out what the orientation of the fibers should be to create a prescribed effect. Okay, but what does this really mean? This means that we can now 3D print really cool, fiber-like microstructures that deform well from one direction. In other words, they can be smashed easily and flattened a great deal during that process. I bet there was a ton of fun to be had at the lab on this day. However, research is not only fun and joy; look, if we turn this object around... Ouch! This side is very rigid and resists deformations well, so there were probably a lot of injuries in the lab that day too. So, clearly, this is really cool. But of course, our question is, what is all this good for? Is this really just an interesting experiment, or is this thing really useful? Well, let's see what this paper has to offer in five amazing experiments. Experiment number one, pliers. The jaws and the hand grips are supposed to be very rigid, checkmark. However, there needs to be a joint between them to allow us to operate it. This joint needs to be deformable, and not any kind of deformable, but exactly the right kind of deformable to make sure it opens and closes properly. Lovely. Loving this one. 3D printing pliers from fiber-like structures. How cool is that? Experiment number two, structured plates. This shows that not all sides have to have the same properties. We can also print a material which has rigid and flexible parts on the same side, a few inches apart, thus introducing interesting directional bending characteristics.
For instance, this one shows a strong collapsing behavior and can grip our finger at the same time. Experiment number three, bendy plates. We can even design structures where one side absorbs deformations while the other one transfers it forward, bending the whole structure. Experiment number four, seat-like structures. The seat surface is designed to deform a little more to create a comfy sensation, but the rest of the seat has to be rigid to not collapse and to last a long time. And finally, example number five, knee-like structures. These freely collapse in this direction to allow movement. However, they resist forces from any other direction. And these are really just some rudimentary examples of what this method can do, but the structures showcased here could be used in soft robotics, soft mechanisms, prosthetics, and even more areas. The main challenge of this work is creating an algorithm that can deal with these breaking patterns, which make for an object that is nearly impossible to manufacture. However, this method can not only eliminate these, but it can design structures that can be manufactured on low-end 3D printers, and it also uses inexpensive materials to accomplish that. And hold on to your papers, because this work showcases a handcrafted technique to perform all this. Not a learning algorithm in sight. And there are two more things that I really liked in this paper. One is that these proposed structures collapse way better than this previous method, and not only is the source code of this project available, but it is available for you to try on one of the best websites on the entirety of the internet, Shadertoy. So good! So, I hope you now agree that the field of 3D printing research is improving at an incredible pace, and I hope that you also had some fun learning about it. What a time to be alive! This episode has been supported by Weights & Biases. In this post, they show you how to use their tool to extract text with all kinds of sizes, shapes, and orientations from your images. During my PhD studies, I trained a ton of neural networks which were used in our experiments. However, over time, there was just too much data in our repositories, and what I am looking for is not data, but insight. And that's exactly how Weights & Biases helps you, by organizing your experiments. It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub, and more. And get this, Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
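To illustrate the core intuition behind these experiments, rigid along the fiber and soft across it, here is a tiny Python sketch. The two stiffness constants and the cosine-squared blend are made-up assumptions for illustration, not the actual material model or the fiber-orientation optimization used in the paper.

# Minimal sketch: a fiber direction makes a printed cell rigid along the fiber
# and soft across it, so prescribing where an object should be soft amounts to
# choosing fiber orientations.
import math

K_ALONG = 100.0   # assumed stiffness along the fiber (rigid)
K_ACROSS = 2.0    # assumed stiffness across the fiber (easily squashed)

def effective_stiffness(fiber_dir, load_dir):
    """Blend the two stiffnesses by how well the load aligns with the fiber."""
    fx, fy = fiber_dir
    lx, ly = load_dir
    norm = math.hypot(fx, fy) * math.hypot(lx, ly)
    c2 = ((fx * lx + fy * ly) / norm) ** 2   # cos^2 of the angle between them
    return K_ACROSS + (K_ALONG - K_ACROSS) * c2

def fibers_for_soft_direction(soft_dir):
    """To be soft along soft_dir, lay the fibers perpendicular to it."""
    sx, sy = soft_dir
    return (-sy, sx)

# Pliers-joint style cell: it should give way vertically but stay rigid
# horizontally, so the fibers end up lying horizontally.
fibers = fibers_for_soft_direction((0.0, 1.0))
print(effective_stiffness(fibers, (0.0, 1.0)))  # low: squashes easily
print(effective_stiffness(fibers, (1.0, 0.0)))  # high: resists deformation

The paper's contribution is essentially solving the inverse of this toy picture over a whole volume, finding printable, freely orientable fiber fields that realize the prescribed soft and rigid directions everywhere at once.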
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Kato Zsolnai-Fehir."}, {"start": 4.64, "end": 10.88, "text": " I would like to show you some results from the area of 3D printing, a topic which is, I think,"}, {"start": 10.88, "end": 16.88, "text": " a little overlooked and show you that works in this field are improving at an incredible pace."}, {"start": 16.88, "end": 24.76, "text": " Now, a common theme among research papers in this area is that they typically allow us to design objects and materials"}, {"start": 24.76, "end": 27.76, "text": " by thinking about how they should look."}, {"start": 27.76, "end": 35.6, "text": " Let's see if this is really true by applying the second law of papers, which says whatever you're thinking about,"}, {"start": 35.6, "end": 39.36, "text": " there is already a two-minute papers episode on that."}, {"start": 39.36, "end": 41.52, "text": " Let's see if it applies here."}, {"start": 41.52, "end": 47.120000000000005, "text": " For instance, just prescribing a shape for 3D printing is old old news."}, {"start": 47.120000000000005, "end": 52.120000000000005, "text": " Here is a previous technique that is able to print exotic materials."}, {"start": 52.12, "end": 60.12, "text": " These are materials that we can start stretching and if we do, instead of thinning, they get fatter."}, {"start": 60.12, "end": 63.72, "text": " We can also 3D print filigree patterns with ease."}, {"start": 63.72, "end": 69.47999999999999, "text": " These are detailed thin patterns typically found in jewelry, fabrics and ornaments,"}, {"start": 69.47999999999999, "end": 77.24, "text": " and as you may imagine, crafting such motives on objects would be incredibly laborious to do by hand."}, {"start": 77.24, "end": 87.08, "text": " We can also prescribe an image and 3D print an object that will cast a caustic pattern that shows exactly that image."}, {"start": 87.08, "end": 89.32, "text": " Beautiful!"}, {"start": 89.32, "end": 95.56, "text": " And printing textured 3D objects in a number of different ways is also possible."}, {"start": 95.56, "end": 102.36, "text": " This is called hydrographic printing and is one of the most flamboyant ways of doing that."}, {"start": 102.36, "end": 105.56, "text": " So, what happens here?"}, {"start": 105.56, "end": 115.4, "text": " Well, we place a film in water, use a chemical activator spray on it, and shove the object in the water,"}, {"start": 115.4, "end": 122.28, "text": " and oh yes, there we go."}, {"start": 122.28, "end": 126.84, "text": " Note that these were all showcased in previous episodes of this series."}, {"start": 126.84, "end": 132.52, "text": " So, in 3D printing, we typically design things by how they should look."}, {"start": 132.52, "end": 135.64000000000001, "text": " Of course, how else would we be designing?"}, {"start": 135.64000000000001, "end": 140.76000000000002, "text": " Well, the authors of this crazy paper don't care about locks at all."}, {"start": 140.76000000000002, "end": 144.60000000000002, "text": " Well, what else would they care about if not the locks?"}, {"start": 144.60000000000002, "end": 148.92000000000002, "text": " Get this, they care about how these objects deform."}, {"start": 148.92000000000002, "end": 159.24, "text": " Yes, with this work, we can design deformations, and the algorithm will find out what the orientation of the fibers should be to create a prescribed effect."}, {"start": 159.24, "end": 162.68, "text": " Okay, but what does this 
really mean?"}, {"start": 162.68, "end": 170.28, "text": " This means that we can now 3D print really cool, fiber-like microstructures that deform well from one direction."}, {"start": 170.28, "end": 176.28, "text": " In other words, they can be smashed easily and flattened a great deal during that process."}, {"start": 176.28, "end": 180.36, "text": " I bet there was a ton of fun to be had at the lab on this day."}, {"start": 180.36, "end": 187.96, "text": " However, research is not only fun and joy, look, if we turn this object around..."}, {"start": 187.96, "end": 192.84, "text": " Ouch! This side is very rigid and resists deformations well,"}, {"start": 192.84, "end": 196.52, "text": " so there was probably a lot of injuries in the lab that day too."}, {"start": 197.32, "end": 200.12, "text": " So, clearly, this is really cool."}, {"start": 200.12, "end": 203.56, "text": " But of course, our question is, what is all this good for?"}, {"start": 204.28, "end": 208.92000000000002, "text": " Is this really just an interesting experiment, or is this thing really useful?"}, {"start": 209.72, "end": 214.20000000000002, "text": " Well, let's see what this paper has to offer in 5 amazing experiments."}, {"start": 214.2, "end": 217.72, "text": " Experiment number one, pliers."}, {"start": 217.72, "end": 222.76, "text": " The jaws and the hand grips are supposed to be very rigid, checkmark."}, {"start": 222.76, "end": 227.72, "text": " However, there needs to be a joint between them to allow us to operate it."}, {"start": 227.72, "end": 232.76, "text": " This joint needs to be deformable and not any kind of deformable,"}, {"start": 232.76, "end": 238.83999999999997, "text": " but exactly the right kind of deformable to make sure it opens and closes properly."}, {"start": 238.84, "end": 244.28, "text": " Lovely. Loving this one. 3D printing pliers from fiber-like structures."}, {"start": 245.0, "end": 250.04, "text": " How cool is that? Experiment number two, structured plates."}, {"start": 250.84, "end": 254.52, "text": " This shows that not all sides have to have the same properties."}, {"start": 254.52, "end": 260.04, "text": " We can also print a material which has rigid and flexible parts on the same side,"}, {"start": 260.04, "end": 265.96, "text": " a few inches apart, thus introducing interesting directional bending characteristics."}, {"start": 265.96, "end": 273.15999999999997, "text": " For instance, this one shows a strong collapsing behavior and can grip our finger at the same time."}, {"start": 275.96, "end": 278.91999999999996, "text": " Experiment number three, bendy plates."}, {"start": 279.56, "end": 285.64, "text": " We can even design structures where one side absorbs deformations while the other one transfers"}, {"start": 285.64, "end": 288.28, "text": " it forward, bending the whole structure."}, {"start": 288.28, "end": 299.96, "text": " 4. Seat-like structures. 
The seat surface is designed to deform a little more"}, {"start": 299.96, "end": 307.4, "text": " to create a comfy sensation, but the rest of the seat has to be rigid to not collapse and last a long time."}, {"start": 309.88, "end": 313.96, "text": " And finally, example number five, knee-like structures."}, {"start": 313.96, "end": 317.71999999999997, "text": " These freely collapse in this direction to allow movement."}, {"start": 319.79999999999995, "end": 323.08, "text": " However, there resist forces from any other direction."}, {"start": 323.08, "end": 328.12, "text": " And these are really just some rudimentary examples of what this method can do,"}, {"start": 328.12, "end": 334.12, "text": " but the structures showcased here could be used in soft robotics, soft mechanisms,"}, {"start": 334.12, "end": 336.52, "text": " prosthetics, and even more areas."}, {"start": 336.52, "end": 341.96, "text": " The main challenge of this work is creating an algorithm that can deal with these breaking patterns,"}, {"start": 341.96, "end": 346.44, "text": " which make for an object that is nearly impossible to manufacture."}, {"start": 347.08, "end": 352.2, "text": " However, this method can not only eliminate these, but it can design structures that can be"}, {"start": 352.2, "end": 358.76, "text": " manufactured on low-end 3D printers, and it also uses inexpensive materials to accomplish that."}, {"start": 359.4, "end": 365.64, "text": " And hold on to your papers because this work showcases a handcrafted technique to perform all this."}, {"start": 365.64, "end": 372.28, "text": " Not a learning algorithm insight, and there are two more things that I really liked in this paper."}, {"start": 372.91999999999996, "end": 377.8, "text": " One is that these proposed structures collapse way better than this previous method,"}, {"start": 378.44, "end": 384.36, "text": " and not only the source code of this project is available, but it is available for you to try"}, {"start": 384.36, "end": 388.76, "text": " on one of the best websites on the entirety of the internet, shader toy."}, {"start": 390.2, "end": 390.84, "text": " So good!"}, {"start": 390.84, "end": 398.03999999999996, "text": " So, I hope you now agree that the field of 3D printing research is improving at an incredible pace,"}, {"start": 398.03999999999996, "end": 401.15999999999997, "text": " and I hope that you also had some fun learning about it."}, {"start": 401.71999999999997, "end": 403.08, "text": " What a time to be alive!"}, {"start": 403.64, "end": 406.91999999999996, "text": " This episode has been supported by weights and biases."}, {"start": 406.91999999999996, "end": 412.67999999999995, "text": " In this post, they show you how to use their tool to extract text with all kinds of sizes,"}, {"start": 412.67999999999995, "end": 415.23999999999995, "text": " shapes, and orientations from your images."}, {"start": 415.24, "end": 420.92, "text": " During my PhD studies, I trained a ton of neural networks which were used in our experiments."}, {"start": 420.92, "end": 425.48, "text": " However, over time, there was just too much data in our repositories,"}, {"start": 425.48, "end": 429.08, "text": " and what I am looking for is not data, but insight."}, {"start": 429.08, "end": 434.12, "text": " And that's exactly how weights and biases helps you by organizing your experiments."}, {"start": 434.12, "end": 440.76, "text": " It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research,"}, {"start": 
440.76, "end": 447.56, "text": " GitHub, and more. And get this, weights and biases is free for all individuals, academics,"}, {"start": 447.56, "end": 453.56, "text": " and open source projects. Make sure to visit them through wnba.com slash papers,"}, {"start": 453.56, "end": 458.44, "text": " or just click the link in the video description, and you can get a free demo today."}, {"start": 458.44, "end": 463.48, "text": " Our thanks to weights and biases for their long-standing support, and for helping us make better"}, {"start": 463.48, "end": 477.32, "text": " videos for you. Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=_4fL4jnC8xQ
Is Simulating Wet Papers Possible? 📃💧
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/authors/RayTune-dcgan/reports/Ray-Tune-Distributed-Hyperparameter-Optimization-at-Scale--VmlldzoyMDEwNDY 📝 The paper "A moving least square reproducing kernel particle method for unified multiphase continuum simulation" is available here: https://cg.cs.tsinghua.edu.cn/papers/SIGASIA-2020-fluid.pdf 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Yes, you see it correctly, this is a paper on paper. The paper paper, if you will. And today you will witness some amazing works in the domain of computer graphics and physics simulations. There is so much progress in this area. For instance, we can simulate honey coiling, baking and melting, bouncy jelly, and many related phenomena. And none of these techniques use any machine learning. These are all good, old-fashioned handcrafted algorithms. And using these, we can simulate stretching and compression to the point that muscle movement simulations are possible. When attaching muscles to bones, as we move the character, the muscles move and contract accurately. What's more, this work can even perform muscle growth simulations. So, are we done here? Did these ingenious computer graphics researchers max out physics simulation, so that there is nothing else to do? Oh no, of course not. Look, this footage is from an earlier computer graphics paper that simulates viscosity and melting fluids. And what I would like you to look at here is not what it does, but what it doesn't do. It starts melting these armadillos beautifully. However, there is something that it doesn't do, which is mixing. The materials start separate and remain separate. Can we improve upon that somehow? Well, this new paper promises that and so much more that it truly makes my head spin. For instance, it can simulate hyperelastic, elastoplastic, viscous, fracturing and multi-phase coupling behaviors. And most importantly, all of these can be simulated within the same framework. Not one paper for each behavior, one paper that can do all of these. That is absolutely insane. So, what does all that mean? Well, I say let's see them all right now through five super fun experiments. Experiment number one. Wet papers. As you see, this technique handles the ball of water. Okay, we've seen that before, and what else? Well, it handles the paper too. Okay, that's getting better, but hold on to your papers and look, it also handles the water's interaction with the paper. Now we're talking. And careful with holding onto that paper, because if you do it correctly, this might happen. As you see, the arguments contained within this paper really hold water. Experiment number two. Fracturing. As you know, most computer graphics papers on physics simulation contain creative simulations destroying armadillos in the most spectacular fashion. This work is of course no different. Yum. Experiment number three. Dissolution. Here we take a glass of water, add some starch powder. It starts piling up. And then slowly starts to dissolve. And note that the water itself also becomes stickier during the process. Number four. Dipping. We first take a piece of biscuit and dip it into the water. Note that the coupling works correctly here. In other words, the water now moves, but what is even better is that the biscuit started absorbing some of that water. And now, when we rip it apart... Oh yes, excellent. And as a light transport researcher by trade, I love watching the shape of the biscuit distorted here due to the refraction of the water. This is a beautiful demonstration of that phenomenon. And number five, the dog. What kind of dog, you ask? Well, this virtual dog gets a big splash of water, starts shaking it off and manages to get rid of most of it. But only most of it. And it can do all of these using one algorithm.
Not one per each of these beautiful phenomena, one technique that can perform all of these. This is absolutely amazing. But it does not stop there. It can also simulate snow, and it not only does it well, but it does that swiftly. How swiftly? It simulated this a bit faster than one frame per second. The starch powder experiment was about one minute per frame. And the slowest example was the dog shaking off the ball of water. The main reason for this is that it required nearly a quarter million particles for the water and the fur. And when the algorithm computes these interactions between them, it can only advance the time in very small increments. It has to do this a hundred thousand times for each second of footage that you see here. Based on how much computation there is to do, that is really, really fast. And don't forget that the first law of papers says that research is a process. Do not look at where we are, look at where we will be, two more papers down the line. And even now, the generality of this system is truly something to behold. Congratulations to the authors on this amazing paper. What a time to be alive! So if you wish to read a beautifully written paper today that does not dissolve in your hands, I highly recommend this one. This episode has been supported by Weights & Biases. In this post, they show you how to use their tool to perform distributed hyperparameter optimization at scale. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights & Biases is free for all individuals, academics, and open source projects. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
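For a sense of scale, here is the back-of-the-envelope arithmetic behind those numbers in a few lines of Python. The substep count and the particle count come from the narration above; the 24 fps frame rate and the one-update-per-particle-per-substep assumption are mine.

# Rough arithmetic for the quoted simulation cost; the exact costs depend on
# the solver and hardware, which the narration does not specify.
SUBSTEPS_PER_SECOND = 100_000      # "a hundred thousand times for each second"
PARTICLES = 250_000                # "nearly a quarter million particles"
FPS = 24                           # assumed playback frame rate

dt = 1.0 / SUBSTEPS_PER_SECOND
substeps_per_frame = SUBSTEPS_PER_SECOND / FPS
particle_updates = SUBSTEPS_PER_SECOND * PARTICLES

print(f"time step: {dt:.0e} s")                                   # 1e-05 s
print(f"substeps per rendered frame: {substeps_per_frame:.0f}")   # about 4167
print(f"particle updates per second of footage: {particle_updates:.1e}")  # 2.5e+10

In other words, a single second of the dog footage requires on the order of tens of billions of particle updates under these assumptions, which is why a roughly one-minute-per-frame turnaround counts as fast here.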
[{"start": 0.0, "end": 4.68, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Kato Jornai-Fahir."}, {"start": 4.68, "end": 9.0, "text": " Yes, you see it correctly, this is a paper, own paper."}, {"start": 9.0, "end": 11.040000000000001, "text": " The paper paper, if you will."}, {"start": 11.040000000000001, "end": 18.240000000000002, "text": " And today you will witness some amazing works in the domain of computer graphics and physics simulations."}, {"start": 18.240000000000002, "end": 21.04, "text": " There is so much progress in this area."}, {"start": 21.04, "end": 28.04, "text": " For instance, we can simulate honey coiling, baking and melting,"}, {"start": 28.04, "end": 31.96, "text": " bouncy jelly, and many related phenomena."}, {"start": 31.96, "end": 35.56, "text": " And none of these techniques use any machine learning."}, {"start": 35.56, "end": 39.64, "text": " These are all good, old-fashioned handcrafted algorithms."}, {"start": 39.64, "end": 47.480000000000004, "text": " And using these, we can simulate stretching and compression to the point that muscle movement simulations are possible."}, {"start": 47.480000000000004, "end": 55.28, "text": " When attaching muscles to bones, as we move the character, the muscles move and contract accurately."}, {"start": 55.28, "end": 60.32, "text": " What's more, this work can even perform muscle growth simulations."}, {"start": 60.32, "end": 62.4, "text": " So, are we done here?"}, {"start": 62.4, "end": 69.84, "text": " Did these ingenious computer graphics researchers max out physics simulation where there is nothing else to do?"}, {"start": 69.84, "end": 72.0, "text": " Oh no, of course not."}, {"start": 72.0, "end": 78.96000000000001, "text": " Look, this footage is from an earlier computer graphics paper that simulates viscosity and melting fluids."}, {"start": 78.96, "end": 85.28, "text": " And what I would like you to look at here is not what it does, but what it doesn't do."}, {"start": 85.28, "end": 88.39999999999999, "text": " It starts melting these armadillos beautifully."}, {"start": 88.39999999999999, "end": 93.83999999999999, "text": " However, there is something that it doesn't do, which is mixing."}, {"start": 93.83999999999999, "end": 98.0, "text": " The material starts separate and remain separate."}, {"start": 98.0, "end": 100.56, "text": " Can we improve upon that somehow?"}, {"start": 100.56, "end": 107.03999999999999, "text": " Well, this no paper promises that and so much more that it truly makes my head spin."}, {"start": 107.04, "end": 114.96000000000001, "text": " For instance, it can simulate hyperelastic, elastoplastic, viscous, fracturing and multi-face coupling behaviors."}, {"start": 114.96000000000001, "end": 120.32000000000001, "text": " And most importantly, all of these can be simulated within the same framework."}, {"start": 120.32000000000001, "end": 126.16000000000001, "text": " Not one paper for each behavior, one paper that can do all of these."}, {"start": 126.16000000000001, "end": 128.88, "text": " That is absolutely insane."}, {"start": 128.88, "end": 131.12, "text": " So, what does all that mean?"}, {"start": 131.12, "end": 136.72, "text": " Well, I say let's see them all right now through five super fun experiments."}, {"start": 136.72, "end": 138.72, "text": " Experiment number one."}, {"start": 138.72, "end": 140.0, "text": " Wet papers."}, {"start": 140.0, "end": 143.76, "text": " As you see, this technique handles the ball of water."}, {"start": 143.76, "end": 
147.12, "text": " Okay, we've seen that before and what else?"}, {"start": 147.92, "end": 150.48, "text": " Well, it handles the paper too."}, {"start": 150.48, "end": 154.64, "text": " Okay, that's getting better, but hold on to your papers and look,"}, {"start": 155.04, "end": 158.56, "text": " it also handles the water's interaction with the paper."}, {"start": 159.44, "end": 160.96, "text": " Now we're talking."}, {"start": 160.96, "end": 167.12, "text": " And careful with holding onto that paper, because if you do it correctly, this might happen."}, {"start": 167.12, "end": 171.68, "text": " As you see, the arguments contained within this paper really hold water."}, {"start": 171.68, "end": 173.92000000000002, "text": " Experiment number two."}, {"start": 173.92000000000002, "end": 175.12, "text": " Fracturing."}, {"start": 175.12, "end": 180.24, "text": " As you know, most computer graphics papers on physics simulation contain creative simulations"}, {"start": 180.24, "end": 184.88, "text": " to destroying armadillos in the most spectacular fashion."}, {"start": 184.88, "end": 187.60000000000002, "text": " This work is of course no different."}, {"start": 187.60000000000002, "end": 189.12, "text": " Yum."}, {"start": 189.12, "end": 190.96, "text": " Experiment number three."}, {"start": 190.96, "end": 192.24, "text": " This solution."}, {"start": 192.24, "end": 196.32, "text": " Here we take a glass of water, add some starch powder."}, {"start": 196.32, "end": 198.24, "text": " It starts piling up."}, {"start": 198.24, "end": 201.6, "text": " And then slowly starts to dissolve."}, {"start": 201.6, "end": 205.76, "text": " A note that the water itself also becomes stickier during the process."}, {"start": 207.36, "end": 208.4, "text": " Number four."}, {"start": 208.4, "end": 209.52, "text": " Dipping."}, {"start": 209.52, "end": 212.96, "text": " We first take a piece of biscuit and dip it into the water."}, {"start": 213.68, "end": 216.16, "text": " Note that the coupling works correctly here."}, {"start": 216.16, "end": 221.52, "text": " In other words, the water now moves, but what is even better is that the biscuit"}, {"start": 221.52, "end": 224.07999999999998, "text": " started absorbing some of that water."}, {"start": 224.64, "end": 226.24, "text": " And now when we repeat a part."}, {"start": 229.92, "end": 231.35999999999999, "text": " Oh yes, excellent."}, {"start": 231.92, "end": 236.72, "text": " And as a light transport researcher by trade, I love watching the shape of biscuits"}, {"start": 236.72, "end": 240.32, "text": " distorted here due to the refraction of the water."}, {"start": 240.32, "end": 243.35999999999999, "text": " This is a beautiful demonstration of that phenomenon."}, {"start": 243.36, "end": 246.32000000000002, "text": " And number five, the dog."}, {"start": 247.04000000000002, "end": 248.4, "text": " What kind of dog you ask?"}, {"start": 248.96, "end": 252.16000000000003, "text": " Well, this virtual dog gets a big splash of water,"}, {"start": 252.8, "end": 257.12, "text": " starts shaking it off and manages to get rid of most of it."}, {"start": 257.92, "end": 259.36, "text": " But only most of it."}, {"start": 260.08000000000004, "end": 263.84000000000003, "text": " And it can do all of these using one algorithm."}, {"start": 264.56, "end": 270.32, "text": " Not one per each of these beautiful phenomena, one technique that can perform all of these."}, {"start": 270.32, "end": 272.96, "text": " There is absolutely amazing."}, {"start": 
273.59999999999997, "end": 274.96, "text": " But it does not stop there."}, {"start": 274.96, "end": 280.08, "text": " It can also simulate snow and it not only does it well, but it does that swiftly."}, {"start": 280.88, "end": 281.59999999999997, "text": " How swiftly?"}, {"start": 282.32, "end": 286.08, "text": " It simulated this a bit faster than one frame per second."}, {"start": 286.64, "end": 289.92, "text": " The starch powder experiment was about one minute per frame."}, {"start": 289.92, "end": 294.0, "text": " And the slowest example was the dog shaking off the bowl of water."}, {"start": 294.48, "end": 300.08, "text": " The main reason for this is that it required near a quarter million particles of water"}, {"start": 300.08, "end": 300.88, "text": " and for hair."}, {"start": 301.52, "end": 304.96, "text": " And when the algorithm computes these interactions between them,"}, {"start": 304.96, "end": 308.4, "text": " it can only advance the time in very small increments."}, {"start": 308.88, "end": 314.15999999999997, "text": " It has to do this a hundred thousand times for each second of footage that you see here."}, {"start": 314.71999999999997, "end": 318.79999999999995, "text": " Based on how much computation there is to do, that is really, really fast."}, {"start": 319.36, "end": 324.24, "text": " And don't forget that the first law of paper says that research is a process."}, {"start": 324.24, "end": 329.12, "text": " Do not look at where we are, look at where we will be, two more papers down the line."}, {"start": 329.12, "end": 334.08, "text": " And even now, the generality of this system is truly something to behold."}, {"start": 334.08, "end": 337.28000000000003, "text": " Congratulations to the authors on this amazing paper."}, {"start": 337.28000000000003, "end": 338.88, "text": " What a time to be alive!"}, {"start": 340.16, "end": 345.76, "text": " So if you wish to read a beautifully written paper today that does not dissolve in your hands,"}, {"start": 345.76, "end": 347.52, "text": " I highly recommend this one."}, {"start": 348.08, "end": 351.36, "text": " This episode has been supported by weights and biases."}, {"start": 351.36, "end": 355.68, "text": " In this post, they show you how to use their tool to perform distributed"}, {"start": 355.68, "end": 358.24, "text": " hyper parameter optimization at scale."}, {"start": 358.24, "end": 363.28000000000003, "text": " Weight and biases provides tools to track your experiments in your deep learning projects."}, {"start": 363.28000000000003, "end": 366.88, "text": " Their system is designed to save you a ton of time and money,"}, {"start": 366.88, "end": 371.52, "text": " and it is actively used in projects at prestigious labs such as OpenAI,"}, {"start": 371.52, "end": 374.08, "text": " Toyota Research, GitHub, and more."}, {"start": 374.08, "end": 378.56, "text": " And the best part is that weights and biases is free for all individuals,"}, {"start": 378.56, "end": 380.88, "text": " academics, and open source projects."}, {"start": 380.88, "end": 383.44, "text": " It really is as good as it gets."}, {"start": 383.44, "end": 390.24, "text": " Make sure to visit them through wnb.com slash papers, or just click the link in the video description,"}, {"start": 390.24, "end": 392.32, "text": " and you can get a free demo today."}, {"start": 392.32, "end": 395.52, "text": " Our thanks to weights and biases for their long-standing support,"}, {"start": 395.52, "end": 398.32, "text": " and for helping us make better 
videos for you."}, {"start": 398.32, "end": 414.0, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=rzIiuGrOZAo
9 Years of Progress In Cloth Simulation! 🧶
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/carlolepelaars/numerai_tutorial/reports/Build-the-World-s-Open-Hedge-Fund-by-Modeling-the-Stock-Market--VmlldzoxODU0NTQ 📝 The paper "Homogenized Yarn-Level Cloth" is available here: http://visualcomputing.ist.ac.at/publications/2020/HYLC/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is 2 Minute Papers with Dr. Károly Zsolnai-Fehér. This day is the perfect day to simulate the kinematics of yarn and cloth on our computers. As you just saw, this is not the usual intro that we use in every episode, so what could that be? Well, this is a simulation specifically made for us using a technique from today's paper. And it has a super high stitching density, which makes it all the better. By the end of this video you will know exactly what that means and why that matters. But first, for context, I would like to show you what researchers were able to do in 2012, and we will see together how far we have come since. This previous work was about creating these highly detailed cloth geometries for digital characters. Here you see one of its coolest results, where it shows how the simulated forces pull the entire piece of garment together. We start out by dreaming up a piece of cloth geometry, and this simulator gradually transforms it into a real-world version of that by subjecting it to real physical forces. This is a step that we call yarn-level relaxation. So this paper was published in 2012 and now nearly 9 years have passed, so I wonder how far we have come since. Well, we can still simulate knitted and woven materials through similar programs that we call direct yarn-level simulations. Here's one. I think we can all agree that these are absolutely beautiful, so what's the catch? The catch is that this is not free; there is a price we have to pay for these results. Look, whoa, these really take forever. We are talking several hours, or for this one, almost an entire day to simulate just one piece of garment. And it gets worse. Look, this one takes more than two full days to compute. Imagine how long we would have to wait for complex scenes in a feature-length movie with several characters. Now of course, this problem is very challenging, and to solve it we have to perform a direct yarn-level simulation. This means that every single strand of yarn is treated as an elastic rod, and we have to compute how they react to external forces, bending deformations and more. That takes a great deal of computation. So, our question today is, can we do this in a more reasonable amount of time? Well, the first law of papers says that research is a process. Do not look at where we are, look at where we will be, two more papers down the line. Let's see if the law holds up here. This new paper promises to retain many important characteristics of the full simulation, but takes much less time to compute. That is amazing. Things stretch, bend and even curl up similarly, but the simulation time is cut down a great deal. How much less? Well, this one is five times faster. That's great. This one 20 times, oh my. But it gets better. Now hold onto your papers and look. This one is almost 60 times faster than the full yarn-level simulation. My goodness. However, of course it's not perfect. In return, pulling effects on individual yarns are neglected, so we lose the look of this amazing holey geometry. I'd love to get that back. Now, these examples were using materials with relatively small stitching density. The previous full yarn-level simulation method scales with the number of yarn segments we add to the garment. So, what does this all mean? This means that higher stitching density gives us more yarn strands, and the more yarn strands there are, the longer it takes to simulate them.
In these cases you can see the knitting patterns, so there aren't that many yarns, and even with that it still took multiple days to compute a simulation with one piece of garment. So, I hope you know what's coming. The new method can also simulate super high stitching densities efficiently. What does that mean? It means that it can also simulate materials like the satin example here. This is not bad by any means, but similar simulations can be done with much simpler simulators, so our question is, why does this matter? Well, let's look at the backside here and marvel at this beautiful scene showcasing the second best curl of the day. Loving it. And now, I hope you're wondering what the best curl of the day is. Here goes: this is the one that was showcased in our intro, which is Two Minute Papers curling up. You can only see this footage here. Beautiful. Huge congratulations and a big thank you to Georg Sperl, the first author of the paper, who simulated this Two Minute Papers scene only for us, and with a super high stitching density. That is quite an honor. Thank you so much. As I promised, we now understand exactly why the stitching density makes it all the better. And if you have been watching this series for a while, I am sure you have seen computer graphics researchers destroy armadillos in the most spectacular manner. Make sure to leave a comment if this is the case. That is super fun, and I thought there must be a version of that for cloth simulations. And of course there is. And now please meet the yarnmadillo. The naming game is very strong here. And just one more thing: Georg Sperl, the first author of this work, was a student of mine in 2014 at the Technical University of Vienna, where he finished a practical project on light simulation programs, and he did excellent work there. Of course he did. And I will note that I was not part of this project in any way, I am just super happy to see him come so far since then. He is now nearing the completion of his PhD, and this simulation paper of his was accepted to the SIGGRAPH conference. That is as good as it gets. Well done, Georg. This episode has been supported by Weights & Biases. In this post they show you how to use their tool to build the foundations for your own hedge fund and analyze the stock market using learning-based tools. If you work with learning algorithms on a regular basis, make sure to check out Weights & Biases. Their system is designed to help you organize your experiments, and it is so good it could shave off weeks or even months of work from your projects, and is completely free for all individuals, academics and open source projects. This really is as good as it gets, and it is hardly a surprise that they are now used by over 200 companies and research institutions. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
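To make the scaling argument above concrete, here is a tiny Python sketch. It stands in for a direct yarn-level simulator by modeling a yarn as a chain of mass points connected by stretch springs, which is only a crude substitute for the full elastic rods (with bending and twisting) the narration describes, and the stitch densities and garment size below are invented for illustration, not numbers from the paper.

import numpy as np

# Crude stand-in for one ingredient of a direct yarn-level simulation:
# a yarn is a chain of mass points connected by stretch springs.
# (Real methods treat each strand as an elastic rod with bending and
# twisting as well, which is considerably more expensive.)
def stretch_forces(points, rest_length, stiffness=100.0):
    d = points[1:] - points[:-1]                      # segment vectors
    length = np.linalg.norm(d, axis=1, keepdims=True)
    direction = d / np.maximum(length, 1e-9)
    f = stiffness * (length - rest_length) * direction
    forces = np.zeros_like(points)
    forces[:-1] += f                                  # pull left endpoints forward
    forces[1:] -= f                                   # pull right endpoints back
    return forces

yarn = np.cumsum(np.random.rand(50, 3) * 0.01, axis=0)   # a wiggly strand of 50 points
print(stretch_forces(yarn, rest_length=0.01)[:3])

# Hypothetical scaling: the work per time step grows with the number of
# yarn segments, and the segment count grows quickly with stitching density.
for stitches_per_cm in (5, 20, 60):
    segments = stitches_per_cm ** 2 * 10_000          # made-up garment size
    print(f"{stitches_per_cm:3d} stitches/cm -> ~{segments:,} segments to simulate")

The second loop is only meant to show why cost explodes with stitching density: finer stitches mean many more yarn segments, and every segment adds work to every time step.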
[{"start": 0.0, "end": 4.88, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Dr. K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.88, "end": 11.14, "text": " This day is the perfect day to simulate the kinematics of yarn and cloth on our computers."}, {"start": 11.14, "end": 17.02, "text": " As you just saw, this is not the usual intro that we use in every episode, so what could"}, {"start": 17.02, "end": 18.02, "text": " that be?"}, {"start": 18.02, "end": 24.42, "text": " Well, this is a simulation specifically made for us using a technique from today's paper."}, {"start": 24.42, "end": 29.580000000000002, "text": " And it has a super high stitching density, which makes it all the better by the end of"}, {"start": 29.580000000000002, "end": 34.94, "text": " this video you will know exactly what that means and why that matters."}, {"start": 34.94, "end": 40.620000000000005, "text": " But first, for context, I would like to show you what researchers were able to do in 2012"}, {"start": 40.620000000000005, "end": 44.46, "text": " and we will see together how far we have come since."}, {"start": 44.46, "end": 49.540000000000006, "text": " This previous work was about creating these highly detailed cloth geometries for digital"}, {"start": 49.540000000000006, "end": 50.540000000000006, "text": " characters."}, {"start": 50.54, "end": 56.6, "text": " Here you see one of its coolest results where it shows how the simulated forces pull the"}, {"start": 56.6, "end": 59.82, "text": " entire piece of garment together."}, {"start": 59.82, "end": 65.66, "text": " We start out by dreaming up a piece of cloth geometry and this simulator gradually transforms"}, {"start": 65.66, "end": 71.62, "text": " it into a real world version of that by subjecting it to real physical forces."}, {"start": 71.62, "end": 75.66, "text": " This is a step that we call the yarn level relaxation."}, {"start": 75.66, "end": 82.5, "text": " So this paper was published in 2012 and now nearly 9 years have passed, so I wonder how"}, {"start": 82.5, "end": 84.58, "text": " far we have come since."}, {"start": 84.58, "end": 89.66, "text": " Well, we can still simulate knitted and woven materials through similar programs that"}, {"start": 89.66, "end": 93.42, "text": " we call direct yarn level simulations."}, {"start": 93.42, "end": 94.42, "text": " Here's one."}, {"start": 94.42, "end": 101.42, "text": " I think we can all agree that these are absolutely beautiful, so was the catch."}, {"start": 101.42, "end": 106.86, "text": " The catch is that this is not free, there is a price we have to pay for these results."}, {"start": 106.86, "end": 111.3, "text": " Look, whoa, these really take forever."}, {"start": 111.3, "end": 118.02000000000001, "text": " We are talking several hours or for this one almost even an entire day to simulate just"}, {"start": 118.02000000000001, "end": 120.38, "text": " one piece of garment."}, {"start": 120.38, "end": 121.9, "text": " And it gets worse."}, {"start": 121.9, "end": 126.34, "text": " Look, this one takes more than two full days to compute."}, {"start": 126.34, "end": 131.18, "text": " Imagine how long we would have to wait for complex scenes in a feature length movie with"}, {"start": 131.18, "end": 133.14000000000001, "text": " several characters."}, {"start": 133.14000000000001, "end": 138.74, "text": " Now of course, this problem is very challenging and to solve it we have to perform a direct"}, {"start": 138.74, "end": 140.82, "text": " yarn level simulation."}, {"start": 
140.82, "end": 146.22, "text": " This means that every single strand of yarn is treated as an elastic rod and we have to"}, {"start": 146.22, "end": 151.98000000000002, "text": " compute how they react to external forces, bending deformations and more."}, {"start": 151.98000000000002, "end": 154.22, "text": " That takes a great deal of computation."}, {"start": 154.22, "end": 159.70000000000002, "text": " So, our question today is, can we do this in a more reasonable amount of time?"}, {"start": 159.7, "end": 164.61999999999998, "text": " Well, the first law of papers says that research is a process."}, {"start": 164.61999999999998, "end": 169.98, "text": " Do not look at where we are, look at where we will be, two more papers down the line."}, {"start": 169.98, "end": 172.57999999999998, "text": " Let's see if the law holds up here."}, {"start": 172.57999999999998, "end": 178.57999999999998, "text": " This new paper promises to retain many important characteristics of the full simulation, but"}, {"start": 178.57999999999998, "end": 181.7, "text": " takes much less time to compute."}, {"start": 181.7, "end": 183.62, "text": " That is amazing."}, {"start": 183.62, "end": 191.22, "text": " Things stretch, bend and even curl up similarly, but the simulation time is cut down a great"}, {"start": 191.22, "end": 192.22, "text": " deal."}, {"start": 192.22, "end": 193.62, "text": " How much less?"}, {"start": 193.62, "end": 197.82, "text": " Well, this one is five times faster."}, {"start": 197.82, "end": 198.9, "text": " That's great."}, {"start": 198.9, "end": 202.78, "text": " This one 20 times, oh my."}, {"start": 202.78, "end": 204.18, "text": " But it gets better."}, {"start": 204.18, "end": 207.14000000000001, "text": " Now hold onto your papers and look."}, {"start": 207.14000000000001, "end": 213.18, "text": " This one is almost 60 times faster than the full yarn level simulation."}, {"start": 213.18, "end": 214.18, "text": " My goodness."}, {"start": 214.18, "end": 217.06, "text": " However, of course it's not perfect."}, {"start": 217.06, "end": 222.78, "text": " In return, pulling effects on individual yarns is neglected, so we lose the look of this"}, {"start": 222.78, "end": 224.58, "text": " amazing holy geometry."}, {"start": 224.58, "end": 227.3, "text": " I'd love to get that back."}, {"start": 227.3, "end": 232.98000000000002, "text": " Now these examples were using materials with relatively small stitching density."}, {"start": 232.98000000000002, "end": 238.42000000000002, "text": " The previous full yarn level simulation method scales with the number of yarn segments we"}, {"start": 238.42000000000002, "end": 239.9, "text": " add to the garment."}, {"start": 239.9, "end": 242.86, "text": " So, what does this all mean?"}, {"start": 242.86, "end": 248.18, "text": " This means that higher stitching density gives us more yarn strands and the more yarn strands"}, {"start": 248.18, "end": 251.54000000000002, "text": " there are, the longer it takes to simulate them."}, {"start": 251.54000000000002, "end": 256.74, "text": " In these cases you can see the knitting patterns, so there aren't that many yarns and even"}, {"start": 256.74, "end": 263.3, "text": " with that it still took multiple days to compute a simulation with one piece of garment."}, {"start": 263.3, "end": 266.06, "text": " So, I hope you know what's coming."}, {"start": 266.06, "end": 270.90000000000003, "text": " It can also simulate super high stitching densities efficiently."}, {"start": 270.90000000000003, 
"end": 272.34000000000003, "text": " What does that mean?"}, {"start": 272.34, "end": 277.65999999999997, "text": " It means that it can also simulate materials like the satin example here."}, {"start": 277.65999999999997, "end": 282.82, "text": " This is not bad by any means, but similar simulations can be done with much simpler"}, {"start": 282.82, "end": 286.9, "text": " simulators, so our question is why does this matter?"}, {"start": 286.9, "end": 293.38, "text": " Well, let's look at the backside here and marvel at this beautiful scene showcasing the"}, {"start": 293.38, "end": 296.05999999999995, "text": " second best curl of the day."}, {"start": 296.05999999999995, "end": 297.58, "text": " Loving it."}, {"start": 297.58, "end": 302.3, "text": " And now, I hope you're wondering what the best curl of the day is."}, {"start": 302.3, "end": 308.09999999999997, "text": " Here goes, this is the one that was showcased in our intro, which is 2 minute papers curling"}, {"start": 308.09999999999997, "end": 309.09999999999997, "text": " up."}, {"start": 309.09999999999997, "end": 311.7, "text": " You can only see this footage here."}, {"start": 311.7, "end": 313.21999999999997, "text": " Beautiful."}, {"start": 313.21999999999997, "end": 318.34, "text": " Huge congratulations and a big thank you to Gerox Spel, the first author of the paper,"}, {"start": 318.34, "end": 324.74, "text": " who simulated this 2 minute paper scene only for us and with a super high stitching density."}, {"start": 324.74, "end": 326.53999999999996, "text": " That is quite an honor."}, {"start": 326.54, "end": 327.86, "text": " Thank you so much."}, {"start": 327.86, "end": 334.70000000000005, "text": " As I promised, we now understand exactly why the stitching density makes it all the better."}, {"start": 334.70000000000005, "end": 338.54, "text": " And if you have been watching this series for a while, I am sure you have seen computer"}, {"start": 338.54, "end": 344.06, "text": " graphics researchers destroy armadillos in the most spectacular manner."}, {"start": 344.06, "end": 346.90000000000003, "text": " Make sure to leave a comment if this is the case."}, {"start": 346.90000000000003, "end": 352.94, "text": " That is super fun and I thought there must be a version of that for cloth simulations."}, {"start": 352.94, "end": 354.70000000000005, "text": " And of course there is."}, {"start": 354.7, "end": 357.78, "text": " And now please meet the yarn modelo."}, {"start": 357.78, "end": 360.78, "text": " The naming game is very strong here."}, {"start": 360.78, "end": 366.38, "text": " And just one more thing, Gerox Spel, the first author of this work was a student of mine"}, {"start": 366.38, "end": 372.09999999999997, "text": " in 2014 at the Technical University of Vienna where he finished a practical project in light"}, {"start": 372.09999999999997, "end": 376.26, "text": " simulation programs and he did excellent work there."}, {"start": 376.26, "end": 377.53999999999996, "text": " Of course he did."}, {"start": 377.53999999999996, "end": 382.65999999999997, "text": " And I will note that I was not part of this project in any way, I am just super happy"}, {"start": 382.66, "end": 385.5, "text": " to see him come so far since then."}, {"start": 385.5, "end": 390.86, "text": " He is now nearing the completion of his PhD and this simulation paper of his was accepted"}, {"start": 390.86, "end": 392.90000000000003, "text": " to the CIGRAF conference."}, {"start": 392.90000000000003, "end": 395.46000000000004, 
"text": " That is as good as it gets."}, {"start": 395.46000000000004, "end": 396.74, "text": " Well done Gerox."}, {"start": 396.74, "end": 400.02000000000004, "text": " This episode has been supported by weights and biases."}, {"start": 400.02000000000004, "end": 405.1, "text": " In this post they show you how to use their tool to build the foundations for your own"}, {"start": 405.1, "end": 409.86, "text": " hedge fund and analyze the stock market using learning based tools."}, {"start": 409.86, "end": 414.78000000000003, "text": " If you work with learning algorithms on a regular basis make sure to check out weights and"}, {"start": 414.78000000000003, "end": 415.78000000000003, "text": " biases."}, {"start": 415.78000000000003, "end": 420.54, "text": " Their system is designed to help you organize your experiments and it is so good it could"}, {"start": 420.54, "end": 425.82, "text": " shave off weeks or even months of work from your projects and is completely free for all"}, {"start": 425.82, "end": 429.86, "text": " individuals, academics and open source projects."}, {"start": 429.86, "end": 434.54, "text": " This really is as good as it gets and it is hardly a surprise that they are now used by"}, {"start": 434.54, "end": 438.38, "text": " over 200 companies and research institutions."}, {"start": 438.38, "end": 443.98, "text": " Make sure to visit them through wnbe.com slash papers or just click the link in the video"}, {"start": 443.98, "end": 447.02, "text": " description and you can get a free demo today."}, {"start": 447.02, "end": 451.78, "text": " Our thanks to weights and biases for their long standing support and for helping us make"}, {"start": 451.78, "end": 453.18, "text": " better videos for you."}, {"start": 453.18, "end": 483.14, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=t7nO7MPcOGo
This AI Makes Beautiful Videos From Your Images! 🌊
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/authors/image-captioning/reports/Generate-Meaningful-Captions-for-Images-with-Attention-Models--VmlldzoxNzg0ODA 📝 The paper "Animating Pictures with Eulerian Motion Fields" is available here: https://eulerian.cs.washington.edu/ GPT-3 website layout tweet: https://twitter.com/sharifshameem/status/1283322990625607681 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-1761027/ Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. In June 2020, OpenAI published an incredible AI-based technique by the name ImageGPT. The problem here was simple to understand, but nearly impossible to actually do, so here goes: we give it an incomplete image, and we ask the AI to fill in the missing pixels. That is, of course, an immensely difficult task, because these images may depict any part of the world around us. It would have to know a great deal about our world to be able to continue the images, so how did it do? Let's have a look. This is undoubtedly a cat, but look, see that white part that is just starting? The interesting part has been sneakily cut out of the image. What could that be? A piece of paper, something else? Now, let's leave the dirty work to the machine and ask it to finish it. Oh yeah, that makes sense. Now, even better, let's have a look at this water droplet example too. We humans know that since we see the remnants of ripples over there too, there must be a splash, but the question is, does the AI know that? Oh yes, yes it does. Amazing. And the true image for reference. But wait a second, if ImageGPT could understand that this is a splash and finish the image like this, then here is an absolutely insane idea: if a machine can understand that this is a splash, could it maybe not only finish the photo, but make a video out of it? Yes, that is indeed an absolutely insane idea, and we like those around here. So, what do you think? Is this a reasonable question, or is this still science fiction? Well, let's have a look at what this new learning-based method does when looking at such an image. It will do something very similar to what we would do: look at the image, estimate the direction of the motion, recognize that these ripples should probably travel outwards, and based on the fact that we've seen many splashes in our lives, if we had the artistic skill, we could surely fill in something similar. So, can the machine do it too? And now, hold on to your papers, because this technique does exactly that. Whoa! Please meet Eulerian motion synthesis, and it not only works amazingly well, but look at the output video, it even loops perfectly. Yum yum yum, I love it. And it works mostly on fluids and smoke. I like that. I like that a lot, because fluids and smoke have difficult, but predictable motion. That is an excellent combination for us, especially given that you see plenty of those simulations on this channel. So, if you are a long-time Fellow Scholar, you already have a keen eye for them. Here are a few example images paired with the synthesized motion fields; these define the trajectory of each pixel, or in other words, the regions that the AI thinks should be animated and how it thinks they should be animated. Now, it gets better, I have found three things that I did not expect to work, but was pleasantly surprised that they did. One, reflections, kind of work. Two, fire, kind of works. And now, if you have been holding on to your paper so far, now squeeze that paper, because here comes the best one. Three, my beard works too. Yes, you heard it right. Now, first things first, this is not any kind of beard, this is an algorithmic beard that was made by an AI. And now, it is animated as if it were a piece of fluid using a different AI. Of course, this is not supposed to be a correct result, just a happy accident, but in any case, this sounds like something straight out of a science fiction movie.
I also like how this has a nice Obi-Wan Kenobi quality to it, loving it. Thank you very much to my friend Oliver Wang and the authors for being so kind and generating these results only for us. That is a huge honor. Thank you. This previous work is from 2019 and creates high-quality motion, but has a limited understanding of the scene itself. And of course, let's see how the new method fares in these cases. Oh yeah, this is a huge leap forward. And what I like even better here is that new research techniques often provide different trade-offs compared to previous methods, but are rarely strictly better than them. In other words, competing techniques usually do some things better and some other things worse than their predecessors, but not this. Look, this is so much better across the board. That is such a rare sight. Amazing. Now, of course, not even this technique is perfect. For example, this part of the image should have been animated, but remains stationary. Also, even though it did well with reflections here, reflections in general are a tougher nut to crack. Finally, thin geometry also still remains a challenge, but this was one paper that made the impossible possible, and just think about what we will be able to do two more papers down the line. My goodness. What a time to be alive. This episode has been supported by Weights & Biases. In this post, they show you how to use their tool to get a neural network to generate captions for your images using an attention model. During my PhD studies, I trained a ton of neural networks which were used in our experiments. However, over time, there was just too much data in our repositories, and what I am looking for is not data, but insight. And that's exactly how Weights & Biases helps you, by organizing your experiments. It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub, and more. And get this: Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
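To give a rough feel for what those per-pixel motion fields are, here is a toy Python sketch that advects an image along a static 2D velocity field with simple Euler steps. This is only an illustration of the general idea, not the paper's method (which uses a learned motion estimator and feature warping to produce seamless loops), and the flow field and image below are made up.

import numpy as np

# Toy illustration of an "Eulerian" motion field: a static 2D velocity
# defined over the image, which the pixel content follows frame after frame.
H, W = 64, 64
ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")

# Made-up flow: gentle horizontal drift whose speed varies with height,
# loosely imitating water moving across the frame.
flow = np.stack([np.zeros((H, W)), 0.5 + 0.3 * np.sin(ys / 8.0)], axis=-1)

def animate(image, flow, num_frames=30):
    # Each new frame copies, for every pixel, the value that sits one
    # Euler step "upstream" along the flow (nearest-neighbour lookup).
    frames = [image]
    for _ in range(num_frames):
        prev = frames[-1]
        src_y = np.clip(np.round(ys - flow[..., 0]).astype(int), 0, H - 1)
        src_x = np.clip(np.round(xs - flow[..., 1]).astype(int), 0, W - 1)
        frames.append(prev[src_y, src_x])
    return frames

image = np.random.rand(H, W)     # stand-in for an input photograph
video = animate(image, flow)     # list of frames animating the still image

The "Eulerian" part is simply that the velocity is attached to fixed image positions rather than to moving particles, so every frame reuses the same per-pixel field.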
[{"start": 0.0, "end": 4.5600000000000005, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Karo Jolai-Fehir."}, {"start": 4.5600000000000005, "end": 12.56, "text": " In June 2020, OpenAI published an incredible AI-based technique by the name ImageGPT."}, {"start": 12.56, "end": 18.080000000000002, "text": " The problem here was simple to understand, but nearly impossible to actually do,"}, {"start": 18.080000000000002, "end": 24.72, "text": " so here goes, we give it an incomplete image, and we ask the AI to fill in the missing pixels."}, {"start": 24.72, "end": 30.24, "text": " That is, of course, an immensely difficult task because these images made the"}, {"start": 30.24, "end": 35.92, "text": " picked any part of the world around us. It would have to know a great deal about our world"}, {"start": 35.92, "end": 41.599999999999994, "text": " to be able to continue the images, so how did it do? Let's have a look."}, {"start": 42.239999999999995, "end": 48.480000000000004, "text": " This is undoubtedly a cat, but look, see that white part that is just starting?"}, {"start": 48.48, "end": 55.839999999999996, "text": " The interesting part has been sneakily cut out of the image. What could that be? A piece of paper,"}, {"start": 56.48, "end": 62.31999999999999, "text": " something else? Now, let's leave the dirty work to the machine and ask it to finish it."}, {"start": 63.519999999999996, "end": 70.47999999999999, "text": " Oh yeah, that makes sense. Now, even better, let's have a look at this water droplet example too."}, {"start": 71.03999999999999, "end": 76.0, "text": " We humans know that since we see the remnants of ripples over there too,"}, {"start": 76.0, "end": 80.48, "text": " there must be a splash, but the question is, does the AI know that?"}, {"start": 81.92, "end": 89.68, "text": " Oh yes, yes it does. Amazing. And the true image for reference. But wait a second,"}, {"start": 90.24, "end": 96.16, "text": " if image GPT could understand that this is a splash and finish the image like this,"}, {"start": 96.16, "end": 102.88, "text": " then here is an absolutely insane idea if a machine can understand that this is a splash,"}, {"start": 102.88, "end": 111.36, "text": " could it maybe not only finish the photo, but make a video out of it? Yes, that is indeed an"}, {"start": 111.36, "end": 117.92, "text": " absolutely insane idea, we like those around here. So, what do you think? Is this a reasonable"}, {"start": 117.92, "end": 125.19999999999999, "text": " question or is this still science fiction? Well, let's have a look at what this new learning"}, {"start": 125.19999999999999, "end": 130.96, "text": " base method does when looking at such an image. It will do something very similar to what we would"}, {"start": 130.96, "end": 137.44, "text": " do, look at the image, estimate the direction of the motion, recognize that these ripples should"}, {"start": 137.44, "end": 143.76000000000002, "text": " probably travel outwards and based on the fact that we've seen many splashes in our lives,"}, {"start": 143.76000000000002, "end": 150.56, "text": " if we had the artistic skill, we could surely fill in something similar. So, can the machine"}, {"start": 150.56, "end": 156.88, "text": " do it too? And now, hold on to your papers because this technique does exactly that."}, {"start": 156.88, "end": 166.0, "text": " Whoa! 
Please meet Eulerian motion synthesis and it not only works amazingly well,"}, {"start": 166.0, "end": 174.88, "text": " but look at the output video, it even loops perfectly. Yum yum yum, I love it. And it works mostly"}, {"start": 174.88, "end": 181.68, "text": " on fluid and smoke. I like that. I like that a lot because fluids and smoke have difficult,"}, {"start": 181.68, "end": 188.08, "text": " but predictable motion. That is an excellent combination for us, especially given that you see"}, {"start": 188.08, "end": 193.28, "text": " plenty of those simulations on this channel. So, if you are a long time fellow scholar,"}, {"start": 193.28, "end": 199.52, "text": " you already have a key knife for them. Here are a few example images paired with the synthesized"}, {"start": 199.52, "end": 206.16, "text": " motion fields, these define the trajectory of each pixel or in other words regions that the AI"}, {"start": 206.16, "end": 214.07999999999998, "text": " thinks should be animated and how it thinks should be animated. Now, it gets better,"}, {"start": 214.07999999999998, "end": 220.4, "text": " I have found three things that I did not expect to work, but was pleasantly surprised that they did."}, {"start": 221.12, "end": 231.92, "text": " One, reflections, kind of work. Two, fire, kind of works. And now, if you have been holding on to"}, {"start": 231.92, "end": 239.51999999999998, "text": " your paper so far, now squeeze that paper because here comes the best one. Three, my beard works too."}, {"start": 240.32, "end": 246.64, "text": " Yes, you heard it right. Now, first things first, this is not any kind of beard,"}, {"start": 246.64, "end": 254.16, "text": " this is an algorithmic beard that was made by an AI. And now, it is animated as if it were a"}, {"start": 254.16, "end": 260.4, "text": " piece of fluid using a different AI. Of course, this is not supposed to be a correct result,"}, {"start": 260.4, "end": 266.15999999999997, "text": " just a happy accident, but in any case, this sounds like something straight out of a science"}, {"start": 266.15999999999997, "end": 272.64, "text": " fiction movie. I also like how this has a nice Obi-Wan Kenobi quality to it, loving it."}, {"start": 273.12, "end": 278.47999999999996, "text": " Thank you very much to my friend Oliver Wong and the authors for being so kind and generating"}, {"start": 278.47999999999996, "end": 286.08, "text": " these results only for us. That is a huge honor. Thank you. This previous work is from 2019 and"}, {"start": 286.08, "end": 291.52, "text": " creates high quality motion, but has a limited understanding of the scene itself."}, {"start": 292.96, "end": 300.47999999999996, "text": " And of course, let's see how the new method fares in these cases. Oh yeah, this is a huge leap forward."}, {"start": 301.68, "end": 306.79999999999995, "text": " And what I like even better here is that no research techniques often provide different"}, {"start": 306.79999999999995, "end": 312.79999999999995, "text": " trade-offs than previous methods, but are really strictly better than them. In other words,"}, {"start": 312.8, "end": 319.44, "text": " competing techniques usually do some things better and some other things worse than their predecessors,"}, {"start": 319.44, "end": 328.24, "text": " but not this. Look, this is so much better across the board. That is such a rare sight. Amazing."}, {"start": 329.12, "end": 334.48, "text": " Now, of course, not even this technique is perfect. 
For example, this part of the image should"}, {"start": 334.48, "end": 341.36, "text": " have been animated, but remains stationary. Also, even though it did well with reflections,"}, {"start": 341.36, "end": 348.40000000000003, "text": " reflection is a tougher nut to crack. Finally, thin geometry also still remains a challenge,"}, {"start": 348.40000000000003, "end": 355.44, "text": " but this was one paper that made the impossible possible and just think about what we will be able to do"}, {"start": 355.44, "end": 361.76, "text": " two more papers down the line. My goodness. What a time to be alive. This episode has been"}, {"start": 361.76, "end": 367.6, "text": " supported by weights and biases. In this post, they show you how to use their tool to get a"}, {"start": 367.6, "end": 374.16, "text": " neural network to generate captions for your images using an attention model. During my PhD studies,"}, {"start": 374.16, "end": 379.68, "text": " I trained a ton of neural networks which were used in our experiments. However, over time,"}, {"start": 379.68, "end": 386.08000000000004, "text": " there was just too much data in our repositories and what I am looking for is not data, but inside."}, {"start": 386.56, "end": 392.56, "text": " And that's exactly how weights and biases helps you by organizing your experiments. It is used by more"}, {"start": 392.56, "end": 399.68, "text": " than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub, and more."}, {"start": 399.68, "end": 406.72, "text": " And get this. Weight and biases is free for all individuals, academics, and open source projects."}, {"start": 406.72, "end": 413.44, "text": " Make sure to visit them through wnba.com slash papers or just click the link in the video description"}, {"start": 413.44, "end": 418.4, "text": " and you can get a free demo today. Our thanks to weights and biases for their long standing"}, {"start": 418.4, "end": 423.28, "text": " support and for helping us make better videos for you. Thanks for watching and for your generous"}, {"start": 423.28, "end": 453.11999999999995, "text": " support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=adHjNqh5iGY
Can You Put All This In a Photo? 🤳
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/wandb/NSFF/reports/Overview-Neural-Scene-Flow-Fields-NSFF-for-Space-Time-View-Synthesis-of-Dynamic-Scenes--Vmlldzo1NzA1ODI 📝 The paper "OmniPhotos: Casual 360° VR Photography" is available here: https://richardt.name/publications/omniphotos/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Virtual Reality, or VR in short, is maturing at a rapid pace, and its promise is truly incredible. If one day it comes to fruition, doctors could be trained to perform surgery in a virtual environment, we could train pilots with better flight simulators, teach astronauts to deal with zero-gravity simulations, you name it. And as you will see, with today's paper we are able to visit nearly any place from afar. Now, to be able to do anything in a virtual world, we have to put on a head-mounted display that can tell the orientation of our head, and often hands, at all times. So what can we do with this? Oh boy, a great deal. For instance, we can type on a virtual keyboard, or implement all kinds of virtual user interfaces that we can interact with. We can also organize imaginary boxes, and of course we can't leave out the Two Minute Papers favorite: going into a physics simulation and playing with it with our own hands. In this previous work, hand-hand interactions did not work too well, which was addressed one more paper down the line, which absolutely nailed the solution. This follow-up work would look at our hands in challenging hand-hand interactions and could deal with deformations, lots of self-contact and self-occlusion. Take a look at this footage. And look, interestingly, they also recorded the real hand model with gloves on. We might think, what a curious design decision. What could that be for? Well, what you see here is not a pair of gloves; what you see here is the reconstruction of the hand by this follow-up paper. This is all great when we play in a computer game, because the entire world around us was previously modeled, so we can look and go anywhere, anytime. But what about operating in the real world? What if we wish to look at a historic landmark from afar? Well, in this case someone needs to capture a 360-degree photo of it, because we can turn our head around and look behind things. And this is what today's paper will be about. This new paper is called OmniPhotos, and it helps us produce these 360-degree views, and when we put on that head-mounted display, we can really get a good feel of a remote place, a group photo, or an important family festivity. So clearly the value proposition is excellent, but we have two questions. One, what do we have to do for it? Flailing. Yes, we need to be flailing. You see, we need to attach a consumer 360 camera to a selfie stick and start flailing for about 10 seconds, like this. This is a crazy idea, because now we created a ton of raw data, roughly what you see here. So this is a deluge of information, and the algorithm needs to crystallize all this mess into a proper 360-degree photograph. What is even more difficult here is that this flailing will almost never create a perfect circle trajectory, so the algorithm first has to estimate the exact camera positions and view directions. And hold on to your papers, because the entirety of this work is handcrafted, no machine learning is inside, and the result is a quite general technique, or in other words, it works on a wide variety of real-world scenes; you see a good selection of those here. Excellent. Our second question is: this is of course not the first method published in this area, so how does it relate to previous techniques? Is it really better? Well, let's see for ourselves.
Previous methods either suffered from not allowing much motion, or the ones that give us more freedom to move around did it by introducing quite a bit of warping into the outputs. And now, let's see if the new method improves upon that. Oh yeah, a great deal. Look, we have the advantages of both methods: we can move around freely, and additionally there is much less warping than here. Now, of course, not even this new technique is perfect. If you look behind the building, you see that the warping hasn't been completely eliminated, but it is a big step ahead of the previous paper. While we look at some more side-by-side comparisons, one more bonus question: what about memory consumption? Well, it eats over a gigabyte of memory. That is typically not too much of a problem for desktop computers, but we might need a little optimization if we wish to do these computations on a mobile device. And now comes the best part. You can browse these OmniPhotos online through the link in the video description, and even the source code and a Windows-based demo are available that work with and without a VR headset. Try it out and let me know in the comments how it went. So with that, we can create these beautiful OmniPhotos cheaply and efficiently and navigate the real world as if it were a computer game. What a time to be alive. What you see here is a report for a previous paper that we covered in this series, which was made with Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that Weights & Biases is free for all individuals, academics and open source projects. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Thanks for watching and for your generous support and I'll see you next time.
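As a toy illustration of the pose-estimation step mentioned above, here is a small Python sketch that fits a circle to noisy 2D camera positions with a simple algebraic least-squares fit. The data and the method are illustrative assumptions only; the actual OmniPhotos pipeline reconstructs full camera poses and view directions (for example with structure-from-motion) rather than a flat 2D circle fit.

import numpy as np

# Toy stand-in for one preprocessing idea: the hand-swept camera path is
# never a perfect circle, so estimate the circle that best explains the
# noisy camera positions.
rng = np.random.default_rng(0)
angles = np.linspace(0.0, 2.0 * np.pi, 120, endpoint=False)
true_radius, true_center = 0.6, np.array([0.1, -0.2])        # made-up ground truth
cams = true_center + true_radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)
cams += rng.normal(scale=0.03, size=cams.shape)               # wobble from "flailing"

# Algebraic circle fit: x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2).
A = np.column_stack([2.0 * cams[:, 0], 2.0 * cams[:, 1], np.ones(len(cams))])
rhs = (cams ** 2).sum(axis=1)
cx, cy, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
radius = np.sqrt(c + cx ** 2 + cy ** 2)
print(f"fitted center ~ ({cx:.2f}, {cy:.2f}), radius ~ {radius:.2f}")

Recovering a nominal circle like this is roughly the kind of cleanup a wobbly, casually captured trajectory needs before the views can be resampled for 360-degree view synthesis.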
[{"start": 0.0, "end": 4.96, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Kato Zsolnai-Fehir."}, {"start": 4.96, "end": 12.280000000000001, "text": " Virtual Reality VR in short is maturing at a rapid pace and its promise is truly incredible."}, {"start": 12.280000000000001, "end": 17.6, "text": " If one day it comes to fruition, doctors could be trained to perform surgery in a virtual"}, {"start": 17.6, "end": 23.48, "text": " environment, we could train pilots with better flight simulators, teach astronauts to deal"}, {"start": 23.48, "end": 26.84, "text": " with zero gravity simulations, you name it."}, {"start": 26.84, "end": 32.96, "text": " As you will see that with today's paper we are able to visit nearly any place from afar."}, {"start": 32.96, "end": 38.68, "text": " Now to be able to do anything in a virtual world we have to put on a head-munted display"}, {"start": 38.68, "end": 44.32, "text": " that can tell the orientation of our head and often hands at all times."}, {"start": 44.32, "end": 46.32, "text": " So what can we do with this?"}, {"start": 46.32, "end": 48.480000000000004, "text": " Oh boy, a great deal."}, {"start": 48.480000000000004, "end": 54.879999999999995, "text": " For instance we can type on a virtual keyboard or implement all kinds of virtual user interfaces"}, {"start": 54.879999999999995, "end": 56.8, "text": " that we can interact with."}, {"start": 56.8, "end": 62.32, "text": " We can also organize imaginary boxes and of course we can't leave out the two minute"}, {"start": 62.32, "end": 69.39999999999999, "text": " papers favorite going into a physics simulation and playing with it with our own hands."}, {"start": 69.39999999999999, "end": 74.2, "text": " In this previous work hand-hand interactions did not work too well which was addressed"}, {"start": 74.2, "end": 79.24, "text": " one more paper down the line which absolutely nailed the solution."}, {"start": 79.24, "end": 84.24, "text": " This follow-up work would look at our hands in challenging hand-hand interactions and"}, {"start": 84.24, "end": 90.32, "text": " could deal with deformations lots of self-contact and self-occlusion."}, {"start": 90.32, "end": 93.03999999999999, "text": " Take a look at this footage."}, {"start": 93.03999999999999, "end": 99.03999999999999, "text": " And look interestingly they also recorded the real hand model with gloves on."}, {"start": 99.03999999999999, "end": 102.6, "text": " We might think what a curious design decision."}, {"start": 102.6, "end": 104.19999999999999, "text": " What could that be for?"}, {"start": 104.19999999999999, "end": 109.64, "text": " Well what you see here is not a pair of gloves, what you see here is the reconstruction"}, {"start": 109.64, "end": 113.36, "text": " of the hand by this follow-up paper."}, {"start": 113.36, "end": 115.6, "text": " What if we wish to look at a historic landmark from before?"}, {"start": 115.6, "end": 121.48, "text": " Well in this case someone needs to capture a 300-year-old hand-hand."}, {"start": 121.48, "end": 126.96, "text": " This is all great when we play in a computer game because the entire world around us was"}, {"start": 126.96, "end": 132.8, "text": " previously modeled so we can look and go anywhere, anytime."}, {"start": 132.8, "end": 135.88, "text": " But what about operating in the real world?"}, {"start": 135.88, "end": 139.28, "text": " What if we wish to look at a historic landmark from before?"}, {"start": 139.28, "end": 146.4, "text": " Well in this case someone 
needs to capture a 360 degree photo of it because we can turn"}, {"start": 146.4, "end": 149.8, "text": " our head around and lock behind things."}, {"start": 149.8, "end": 152.92000000000002, "text": " And this is what today's paper will be about."}, {"start": 152.92000000000002, "end": 159.56, "text": " This new paper is called Omni Photos and it helps us produce these 360 view synthesis"}, {"start": 159.56, "end": 164.68, "text": " and when we put on that head mounted display we can really get a good feel of a remote"}, {"start": 164.68, "end": 169.52, "text": " place, a group photo or an important family festivity."}, {"start": 169.52, "end": 175.96, "text": " So clearly the value proposition is excellent but we have two questions."}, {"start": 175.96, "end": 178.88, "text": " One, what do we have to do for it?"}, {"start": 178.88, "end": 179.88, "text": " Flailing."}, {"start": 179.88, "end": 182.56, "text": " Yes we need to be flailing."}, {"start": 182.56, "end": 188.92000000000002, "text": " You see we need to attach a consumer 360 camera to a selfie stick and start flailing for"}, {"start": 188.92000000000002, "end": 192.32, "text": " about 10 seconds like this."}, {"start": 192.32, "end": 199.23999999999998, "text": " This is a crazy idea because now we created a ton of raw data roughly what you see here."}, {"start": 199.23999999999998, "end": 205.32, "text": " So this is a deluge of information and the algorithm needs to crystallize all this mess"}, {"start": 205.32, "end": 210.16, "text": " into a proper 360 photograph."}, {"start": 210.16, "end": 215.16, "text": " What is even more difficult here is that this flailing will almost never create a perfect"}, {"start": 215.16, "end": 221.64, "text": " circle trajectory so the algorithm first has to estimate the exact camera positions and"}, {"start": 221.64, "end": 223.44, "text": " view directions."}, {"start": 223.44, "end": 229.27999999999997, "text": " And hold on to your papers because the entirety of this work is handcrafted, no machine learning"}, {"start": 229.27999999999997, "end": 235.27999999999997, "text": " is inside and the result is a quite general technique or in other words it works on a wide"}, {"start": 235.27999999999997, "end": 241.07999999999998, "text": " variety of real world scenes you see a good selection of those here."}, {"start": 241.07999999999998, "end": 242.07999999999998, "text": " Excellent."}, {"start": 242.07999999999998, "end": 247.44, "text": " Our second question is this is of course not the first method published in this area"}, {"start": 247.44, "end": 251.04, "text": " so how does it relate to previous techniques?"}, {"start": 251.04, "end": 252.79999999999998, "text": " Is it really better?"}, {"start": 252.79999999999998, "end": 255.35999999999999, "text": " Well let's see for ourselves."}, {"start": 255.35999999999999, "end": 261.36, "text": " Previous methods either suffered from not allowing too much motion or the other ones that"}, {"start": 261.36, "end": 266.71999999999997, "text": " give us more freedom to move around did it by introducing quite a bit of warping into"}, {"start": 266.71999999999997, "end": 268.44, "text": " the outputs."}, {"start": 268.44, "end": 272.52, "text": " And now let's see if the new method improves upon that."}, {"start": 272.52, "end": 274.92, "text": " Oh yeah, a great deal."}, {"start": 274.92, "end": 281.56, "text": " Look, we have the advantages of both methods, we can move around freely and additionally there"}, {"start": 281.56, "end": 284.36, 
"text": " is much less warping than here."}, {"start": 284.36, "end": 287.64000000000004, "text": " Now of course not even this new technique is perfect."}, {"start": 287.64000000000004, "end": 293.16, "text": " If you look behind the building you see that the warping hasn't been completely eliminated"}, {"start": 293.16, "end": 297.08000000000004, "text": " but it is a big step ahead of the previous paper."}, {"start": 297.08000000000004, "end": 302.6, "text": " We will look at some more side by side comparisons, one more bonus question, what about memory"}, {"start": 302.6, "end": 303.6, "text": " consumption?"}, {"start": 303.6, "end": 307.76000000000005, "text": " Well, it eats over a gigabyte of memory."}, {"start": 307.76000000000005, "end": 312.88, "text": " That is typically not too much of a problem for desktop computers but we might need a little"}, {"start": 312.88, "end": 317.52000000000004, "text": " optimization if we wish to do these computations on a mobile device."}, {"start": 317.52000000000004, "end": 319.52000000000004, "text": " And now comes the best part."}, {"start": 319.52000000000004, "end": 324.28000000000003, "text": " You can browse these omnifortals online through the link in the video description and even"}, {"start": 324.28000000000003, "end": 330.36, "text": " the source code and the Windows-based demo is available that works with and without"}, {"start": 330.36, "end": 332.04, "text": " a VR headset."}, {"start": 332.04, "end": 335.48, "text": " Try it out and let me know in the comments how it went."}, {"start": 335.48, "end": 341.52000000000004, "text": " So with that we can create these beautiful omnifortals cheaply and efficiently and navigate"}, {"start": 341.52000000000004, "end": 345.24, "text": " the real world as if it were a computer game."}, {"start": 345.24, "end": 347.0, "text": " What a time to be alive."}, {"start": 347.0, "end": 351.52000000000004, "text": " What you see here is a report for a previous paper that we covered in this series which was"}, {"start": 351.52000000000004, "end": 353.88, "text": " made by Wades and Biasis."}, {"start": 353.88, "end": 358.40000000000003, "text": " Wades and Biasis provides tools to track your experiments in your deep learning projects."}, {"start": 358.4, "end": 363.15999999999997, "text": " Your system is designed to save you a ton of time and money and it is actively used"}, {"start": 363.15999999999997, "end": 369.4, "text": " in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more."}, {"start": 369.4, "end": 374.52, "text": " And the best part is that Wades and Biasis is free for all individuals, academics and"}, {"start": 374.52, "end": 376.0, "text": " open source projects."}, {"start": 376.0, "end": 378.67999999999995, "text": " It really is as good as it gets."}, {"start": 378.67999999999995, "end": 384.35999999999996, "text": " Make sure to visit them through wnb.com slash papers or just click the link in the video"}, {"start": 384.35999999999996, "end": 387.44, "text": " description and you can get a free demo today."}, {"start": 387.44, "end": 397.04, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=HNJPasJUGqs
Do Neural Networks Think Like Our Brain? OpenAI Answers! 🧠
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/gudgud96/big-sleep-test/reports/Image-Generation-Based-on-Abstract-Concepts-using-CLIP-BigGAN--Vmlldzo1MjA2MTE 📝 The paper "Multimodal Neurons in Artificial Neural Networks" is available here: https://openai.com/blog/multimodal-neurons/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-2900362/ Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to cover many important questions in life. For instance, who is this? This is Halle Berry. Now I'll show you this. Who is it? It is, of course, also Halle Berry. And if I show you this piece of text, who does it refer to? Again, Halle Berry. So, why are these questions interesting? Well, an earlier paper found out from brain readings that we indeed have person neurons in our brain. These are neurons specialized to recognize a particular human being. That is quite interesting. And what is even more interesting is not that we have person neurons, but that these neurons are multimodal. What does that mean? It means that we understand the essence of what makes Halle Berry, regardless of whether it is a photo, a drawing, or anything else. I see. Alright. Well then, our first question today is: do neural networks also have multimodal neurons? Not necessarily. The human brain is an inspiration for the artificial neural networks that we can simulate on our computers, but do they work like the brain? Well, if we study their inner workings, our likely answer will be no, not in the slightest. But still, no one can stop us from a little experimentation, so let's try this with a common neural network architecture. This neuron responds to human faces and says that this is indeed a human face. So far so good. Now, if we provide it with a drawing of a human face, it won't recognize it to be a face. Well, so much for this multimodal idea; this one is surely not a brain in a jar. But wait, we don't give up so easily around here. This is not the only neural network architecture that exists. Let's grab a different one. This one is called OpenAI's CLIP, and it is remarkably good at generalizing concepts. Let's see how it deals with the same problem. Yes, this neuron responds to spiders and Spider-Man. That's the easy part. Now please hold on to your papers, because now comes the hard part: drawings and comics of spiders and Spider-Man. Yes, it responds to those too. Wonderful. Now comes the final boss, which is spider-related writing. And it responds to that too. Now of course, this doesn't mean that this neural network would be a brain in a jar, but it is a tiny bit closer to our thinking than previous architectures. And now comes the best part. This insight opens up the possibility for three amazing experiments. Experiment number one: essence. So it appears to understand the essence of a concept or a person. That is absolutely amazing. So I wonder if we can turn this problem around and ask what it thinks about different concepts. It would be the equivalent of saying, give me all things spiders and Spider-Man. Let's do that with Lady Gaga. It says this is the essence of Lady Gaga. We get the smug smile, very good. And it says that the essence of Jesus Christ is this, and it also includes the crown of thorns. So far, flying colors. Now we will see some images of feelings; some of you might find some of them disturbing. I think the vast majority of you humans will be just fine looking at them, but I wanted to let you know just in case. So what is the essence of someone being shocked? Well, this. I can attest that this is basically me when reading this paper. My eyes were popping out just like this. Sleepiness? Yes, that is a coffee person before coffee, all right. The happiness, crying and seriousness neurons also embody these feelings really well. Experiment number two: adversarial attacks. We know that OpenAI's CLIP responds to photos and drawings of the same thing. So let's try some nasty attacks involving combining the two. When we give it these images, it can classify them with ease. This is an apple. This is a laptop, a mug, and so on. Nothing too crazy going on here. However, now let's prepare a nasty adversarial attack. Previously, sophisticated techniques were developed to fool a neural network by adding some nearly imperceptible noise to an image. Let's have a look at how this works. First, we present the previous neural network with an image of a bus, and it will successfully tell us that yes, this is indeed a bus. Of course it does. Now we show it not an image of a bus, but a bus plus some carefully crafted noise that is barely perceptible, which forces the neural network to misclassify it as an ostrich. I will stress that this is not just any kind of noise, but the kind of noise that exploits biases in the neural network, which is by no means trivial to craft. So now I hope you are expecting a sophisticated adversarial attack against the wonderful CLIP neural network. Yes, that will do. Or will it? Let's see together. Yes, indeed. I don't know if you knew, but this is not an apple, this is a pizza. And so is this one. The neural network fell for these ones, but it was able to resist this sophisticated attack in the case of the coffee mug and the phone. Perhaps the pizza labels had too small a footprint in the image, so let's try an even more sophisticated version of this attack. Now you may think that this is a Chihuahua, but that is completely wrong, because this is a pizza indeed. Not a Chihuahua in sight anywhere in this image. No, no. So what did we learn here? Well, interestingly, this CLIP neural network is more general than previous techniques; however, its superpowers come at a price. And that price is that it can be exploited easily with simple, systematic attacks. That is a great lesson indeed. Experiment number three: understanding feelings. Now this will be one heck of an experiment. We will try to answer the age-old question, which is: how would you describe feelings to a machine? Well, it's hard to explain such a concept, but all humans understand what being bored means. However, luckily, these neural networks have neurons, and they can use those to explain to us what they think about different concepts. An interesting idea here is that feelings could sort of emerge as a combination of other, more elementary neurons that are already understood. If this sounds a little nebulous, let's go with an example. What does the machine think it means that someone is bored? Well, it says that bored is relaxed plus grumpy. This isn't quite the way I think about it, but not bad at all, little machine. I like the way you think. Let's try one more. How do we explain to a machine what a surprise means? Well, it says surprise is celebration plus shock. Nice. What about madness? Let's see: evil plus serious plus a hint of mental illness. And speaking of combinations, there are two more examples that I really liked. If we are looking for text that embodies evil, we get something like this. And now give me an evil building. Oh yes, I think that works really well, but there are internet forums where we have black belt experts specializing in this very topic. So if you're one of them, please let me know in the comments what you think. The paper also contains a ton more information. For instance, there is an experiment with the Stroop effect. This explores whether the neural network reacts to the meaning of the text or the color of the text. I will only tease this because I would really like you to read the paper, which is available below in the video description. So there we go: neural networks are by no means brains in a jar. They are very much computer programs. However, CLIP has some similarities, and we also found that there is a cost to that. What a time to be alive. What you see here is a report on this exact paper we have talked about, which was made by Weights & Biases. I put a link to it in the description. Make sure to have a look. I think it helps you understand this paper better. If you work with learning algorithms on a regular basis, make sure to check out Weights & Biases. Their system is designed to help you organize your experiments, and it is so good it could shave off weeks or even months of work from your projects, and it is completely free for all individuals, academics and open source projects. This really is as good as it gets, and it is hardly a surprise that they are now used by over 200 companies and research institutions. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
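The transcript above describes CLIP responding to photos, drawings, and text of the same concept. As a rough illustration of how such multimodal behavior can be probed, here is a minimal sketch using the publicly released openai/CLIP package (https://github.com/openai/CLIP); the image file names are placeholders, and this probes the joint image-text embedding space rather than individual neurons as the paper does.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Placeholder file names: a photo and a drawing of the same concept
images = [preprocess(Image.open(f)).unsqueeze(0).to(device)
          for f in ("spider_photo.jpg", "spider_drawing.png")]
texts = clip.tokenize(["a photo of a spider", "a photo of a cat"]).to(device)

with torch.no_grad():
    text_features = model.encode_text(texts)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    for image in images:
        image_features = model.encode_image(image)
        image_features /= image_features.norm(dim=-1, keepdim=True)
        # Cosine similarity: both the photo and the drawing should score
        # higher against the spider caption than against the cat caption.
        print((image_features @ text_features.T).squeeze(0).tolist())
```

The "carefully crafted noise" attack mentioned for the bus-to-ostrich example is commonly implemented with gradient-based methods such as the fast gradient sign method (FGSM); the sketch below is a generic FGSM step in PyTorch, not the specific attack used in the cited works.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """One FGSM step: nudge every pixel slightly in the direction that
    increases the classification loss, producing a barely perceptible
    perturbation that can flip the predicted class."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Note that the typographic attack in the transcript, taping a "pizza" label onto an apple, needs no gradients at all, which is exactly why it is so easy to mount against CLIP.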
[{"start": 0.0, "end": 4.76, "text": " Dear Fellow Scholars, this is two Minute Dippers with Dr. Karojona Ifehir."}, {"start": 4.76, "end": 9.14, "text": " Today we are going to cover many important questions in life."}, {"start": 9.14, "end": 11.92, "text": " For instance, who is this?"}, {"start": 11.92, "end": 14.120000000000001, "text": " This is Hallibary."}, {"start": 14.120000000000001, "end": 17.44, "text": " Now I'll show you this. Who is it?"}, {"start": 17.44, "end": 20.400000000000002, "text": " It is, of course, also Hallibary."}, {"start": 20.400000000000002, "end": 24.88, "text": " And if I show you this piece of text, who does it refer to?"}, {"start": 24.88, "end": 26.96, "text": " Again, Hallibary."}, {"start": 26.96, "end": 30.080000000000002, "text": " So, why are these questions interesting?"}, {"start": 30.080000000000002, "end": 36.18, "text": " Well, an earlier paper found out from brain readings that we indeed have person neurons"}, {"start": 36.18, "end": 37.620000000000005, "text": " in our brain."}, {"start": 37.620000000000005, "end": 42.14, "text": " These are neurons specialized to recognize a particular human being."}, {"start": 42.14, "end": 44.6, "text": " That is quite interesting."}, {"start": 44.6, "end": 49.84, "text": " And what is even more interesting is not that we have person neurons, but that these neurons"}, {"start": 49.84, "end": 51.64, "text": " are multimodal."}, {"start": 51.64, "end": 53.08, "text": " What does that mean?"}, {"start": 53.08, "end": 57.879999999999995, "text": " This means that we understand the essence of what makes Hallibary, regardless of whether"}, {"start": 57.879999999999995, "end": 61.519999999999996, "text": " it is a photo, a drawing, or anything else."}, {"start": 61.519999999999996, "end": 62.519999999999996, "text": " I see."}, {"start": 62.519999999999996, "end": 63.519999999999996, "text": " Alright."}, {"start": 63.519999999999996, "end": 70.52, "text": " Well then, our first question today is, do neuron networks also have multimodal neurons?"}, {"start": 70.52, "end": 71.68, "text": " Not necessarily."}, {"start": 71.68, "end": 78.2, "text": " The human brain is an inspiration for an artificial neuron network that we can simulate on our computers,"}, {"start": 78.2, "end": 80.4, "text": " but do they work like the brain?"}, {"start": 80.4, "end": 86.72, "text": " Well, if we study their inner workings, our likely answer will be no, not in the slightest."}, {"start": 86.72, "end": 92.04, "text": " But still, no one can stop us from a little experimentation, so let's try this for a common"}, {"start": 92.04, "end": 94.24000000000001, "text": " neural network architecture."}, {"start": 94.24000000000001, "end": 100.28, "text": " This neuron responds to human faces and says that this is indeed a human face."}, {"start": 100.28, "end": 101.88000000000001, "text": " So far so good."}, {"start": 101.88000000000001, "end": 108.16000000000001, "text": " Now, if we provide it with a drawing of a human face, it won't recognize it to be a face."}, {"start": 108.16, "end": 114.8, "text": " Well, so much for this multimodal idea, this one is surely not a brain in a jar."}, {"start": 114.8, "end": 118.28, "text": " But wait, we don't give up so easily around here."}, {"start": 118.28, "end": 122.0, "text": " This is not the only neural network architecture that exists."}, {"start": 122.0, "end": 123.88, "text": " Let's grab a different one."}, {"start": 123.88, "end": 130.0, "text": " This one is called OpenAI's Clip, and 
it is remarkably good at generalizing concepts."}, {"start": 130.0, "end": 133.92, "text": " Let's see how it can deal with the same problem."}, {"start": 133.92, "end": 138.92, "text": " Yes, this neuron responds to spiders and spider-man."}, {"start": 138.92, "end": 140.79999999999998, "text": " That's the easy part."}, {"start": 140.79999999999998, "end": 145.92, "text": " Now please hold on to your papers because now comes the hard part."}, {"start": 145.92, "end": 149.92, "text": " Drawings and comics of spiders and spider-man."}, {"start": 149.92, "end": 153.67999999999998, "text": " Yes, it responds to that too."}, {"start": 153.67999999999998, "end": 154.67999999999998, "text": " Wonderful."}, {"start": 154.67999999999998, "end": 160.83999999999997, "text": " Now comes the final boss, which is spider-related writings."}, {"start": 160.83999999999997, "end": 163.39999999999998, "text": " And it responds to that too."}, {"start": 163.4, "end": 168.44, "text": " Now of course, this doesn't mean that this neural network would be a brain in a jar, but"}, {"start": 168.44, "end": 173.20000000000002, "text": " it is a tiny bit closer to our thinking than previous architectures."}, {"start": 173.20000000000002, "end": 175.0, "text": " And now comes the best part."}, {"start": 175.0, "end": 180.68, "text": " This insight opens up the possibility for three amazing experiments."}, {"start": 180.68, "end": 182.36, "text": " Experiment number one."}, {"start": 182.36, "end": 183.36, "text": " Essence."}, {"start": 183.36, "end": 188.36, "text": " So it appears to understand the essence of a concept or a person."}, {"start": 188.36, "end": 190.44, "text": " That is absolutely amazing."}, {"start": 190.44, "end": 197.0, "text": " So I wonder if we can turn this problem around and ask what it thinks about different concepts."}, {"start": 197.0, "end": 202.84, "text": " It would be the equivalent to saying, give me all things spiders and spider-man."}, {"start": 202.84, "end": 205.4, "text": " Let's do that with Lady Gaga."}, {"start": 205.4, "end": 209.04, "text": " It says this is the essence of Lady Gaga."}, {"start": 209.04, "end": 212.04, "text": " We get the smug smile very good."}, {"start": 212.04, "end": 217.76, "text": " And it says that the essence of Jesus Christ is this and it also includes the crown of"}, {"start": 217.76, "end": 219.2, "text": " thorns."}, {"start": 219.2, "end": 221.72, "text": " So far, flying colors."}, {"start": 221.72, "end": 227.51999999999998, "text": " Now we will see some images of feelings, some of you might find some of them disturbing."}, {"start": 227.51999999999998, "end": 232.95999999999998, "text": " I think the vast majority of you humans will be just fine looking at them, but I wanted"}, {"start": 232.95999999999998, "end": 235.28, "text": " to let you know just in case."}, {"start": 235.28, "end": 239.16, "text": " So what is the essence of someone being shocked?"}, {"start": 239.16, "end": 241.2, "text": " Well, this."}, {"start": 241.2, "end": 246.04, "text": " I can attest to that this is basically me when reading this paper."}, {"start": 246.04, "end": 249.72, "text": " My eyes were popping out just like this."}, {"start": 249.72, "end": 250.72, "text": " Sleepiness."}, {"start": 250.72, "end": 254.28, "text": " Yes, that is a coffee person before coffee or right."}, {"start": 254.28, "end": 260.8, "text": " The happiness, crying and seriousness neurons also embody these feelings really well."}, {"start": 260.8, "end": 262.64, "text": " 
Experiment number two."}, {"start": 262.64, "end": 263.96, "text": " Adversarial attacks."}, {"start": 263.96, "end": 269.48, "text": " We know that open AI's clip responds to photos and drawings of the same thing."}, {"start": 269.48, "end": 274.32, "text": " So let's try some nasty attacks involving combining the two."}, {"start": 274.32, "end": 278.48, "text": " When we give it these images, it can classify them with ease."}, {"start": 278.48, "end": 280.04, "text": " This is an apple."}, {"start": 280.04, "end": 283.4, "text": " This is a laptop, a mug and so on."}, {"start": 283.4, "end": 285.44, "text": " Nothing too crazy going on here."}, {"start": 285.44, "end": 290.76, "text": " However, now let's prepare a nasty adversarial attack."}, {"start": 290.76, "end": 296.12, "text": " Previously sophisticated techniques were developed to fool a neural network by adding some"}, {"start": 296.12, "end": 299.32, "text": " nearly imperceptible noise to an image."}, {"start": 299.32, "end": 301.4, "text": " Let's have a look at how this works."}, {"start": 301.4, "end": 307.56, "text": " First, we present the previous neural network with an image of a bus and it will successfully"}, {"start": 307.56, "end": 311.4, "text": " tell us that yes, this is indeed a bus."}, {"start": 311.4, "end": 312.59999999999997, "text": " Of course it does."}, {"start": 312.59999999999997, "end": 319.84, "text": " Now we show it not an image of a bus, but a bus plus some carefully crafted noise that"}, {"start": 319.84, "end": 326.47999999999996, "text": " is barely perceptible that forces the neural network to misclassify it as an ostrich."}, {"start": 326.48, "end": 332.12, "text": " I will stress that this is not any kind of noise, but the kind of noise that exploits biases"}, {"start": 332.12, "end": 336.6, "text": " in the neural network, which is by no means trivial to craft."}, {"start": 336.6, "end": 343.12, "text": " So now I hope you are expecting a sophisticated adversarial attack against the wonderful clip"}, {"start": 343.12, "end": 344.6, "text": " neural network."}, {"start": 344.6, "end": 347.76, "text": " Yes, that will do."}, {"start": 347.76, "end": 349.24, "text": " Or will it?"}, {"start": 349.24, "end": 350.56, "text": " Let's see together."}, {"start": 350.56, "end": 358.16, "text": " Yes, indeed I don't know if you knew, but this is not an apple, this is a pizza."}, {"start": 358.16, "end": 359.96, "text": " And so is this one."}, {"start": 359.96, "end": 365.48, "text": " The neural network fell for these ones, but it was able to resist this sophisticated attack"}, {"start": 365.48, "end": 369.32, "text": " in the case of the coffee mug and the phone."}, {"start": 369.32, "end": 375.8, "text": " Perhaps the pizza labels had too small a footprint in an image, so let's try an even more sophisticated"}, {"start": 375.8, "end": 378.08, "text": " version of this attack."}, {"start": 378.08, "end": 383.64, "text": " Now you may think that this is a Chihuahua, but that is completely wrong because this"}, {"start": 383.64, "end": 385.88, "text": " is a pizza indeed."}, {"start": 385.88, "end": 389.64, "text": " Not a Chihuahua inside anywhere in this image."}, {"start": 389.64, "end": 390.96, "text": " No, no."}, {"start": 390.96, "end": 392.91999999999996, "text": " So what did we learn here?"}, {"start": 392.91999999999996, "end": 398.59999999999997, "text": " Well, interestingly, this clip neural network is more general than previous techniques,"}, {"start": 398.59999999999997, "end": 
402.08, "text": " however, its superpowers come at a price."}, {"start": 402.08, "end": 408.4, "text": " And that price says that it can be exploited easily with simple systematic attacks."}, {"start": 408.4, "end": 411.32, "text": " That is a great lesson indeed."}, {"start": 411.32, "end": 414.84, "text": " Experiment number three, understanding feelings."}, {"start": 414.84, "end": 417.76, "text": " Now this will be one heck of an experiment."}, {"start": 417.76, "end": 422.96, "text": " We will try to answer the age old question, which is how would you describe feelings"}, {"start": 422.96, "end": 423.96, "text": " to a machine?"}, {"start": 423.96, "end": 429.91999999999996, "text": " Well, it's hard to explain such a concept, but all humans understand what being bored"}, {"start": 429.91999999999996, "end": 430.91999999999996, "text": " means."}, {"start": 430.92, "end": 436.0, "text": " However, luckily, these neural networks have neurons and they can use those to explain"}, {"start": 436.0, "end": 439.56, "text": " to us what they think about different concepts."}, {"start": 439.56, "end": 445.24, "text": " An interesting idea here is that feelings could sort of emerge as a combination of other"}, {"start": 445.24, "end": 449.44, "text": " more elementary neurons that are already understood."}, {"start": 449.44, "end": 453.28000000000003, "text": " If this sounds a little nebulous, let's go with that example."}, {"start": 453.28000000000003, "end": 457.0, "text": " What does the machine think it means that someone is bored?"}, {"start": 457.0, "end": 462.0, "text": " Well, it says that bored is relaxed plus grumpy."}, {"start": 462.0, "end": 467.04, "text": " This isn't quite the way I think about it, but not bad at all little machine."}, {"start": 467.04, "end": 469.0, "text": " I like the way you think."}, {"start": 469.0, "end": 470.68, "text": " Let's try one more."}, {"start": 470.68, "end": 474.96, "text": " How do we explain to a machine what a surprise means?"}, {"start": 474.96, "end": 479.4, "text": " Well it says surprise is celebration plus shock."}, {"start": 479.4, "end": 481.2, "text": " Nice."}, {"start": 481.2, "end": 482.72, "text": " What about madness?"}, {"start": 482.72, "end": 488.68, "text": " Let's see, evil plus serious plus a hint of mental illness."}, {"start": 488.68, "end": 493.44000000000005, "text": " And when talking about combinations, there are two more examples that I really liked."}, {"start": 493.44000000000005, "end": 499.0, "text": " If we are looking for text that embodies evil, we get something like this."}, {"start": 499.0, "end": 501.48, "text": " And now give me an evil building."}, {"start": 501.48, "end": 507.28000000000003, "text": " Oh yes, I think that works really well, but there are internet forums where we have black"}, {"start": 507.28000000000003, "end": 510.64000000000004, "text": " belt experts specializing in this very topic."}, {"start": 510.64, "end": 514.72, "text": " So if you're one of them, please let me know in the comments what you think."}, {"start": 514.72, "end": 518.04, "text": " The paper also contains a ton more information."}, {"start": 518.04, "end": 521.96, "text": " For instance, there is an experiment with the strupe effect."}, {"start": 521.96, "end": 527.96, "text": " This explores whether the neural network reacts to the meaning of the text or the color"}, {"start": 527.96, "end": 529.12, "text": " of the text."}, {"start": 529.12, "end": 533.48, "text": " I will only tease this because I would really 
like you to read the paper which is available"}, {"start": 533.48, "end": 536.04, "text": " below in the video description."}, {"start": 536.04, "end": 541.0, "text": " So there we go, neural networks are by no means brains in a jar."}, {"start": 541.0, "end": 543.36, "text": " They are very much computer programs."}, {"start": 543.36, "end": 550.0799999999999, "text": " However, clip has some similarities and we also found that there is a cost to death."}, {"start": 550.0799999999999, "end": 551.7199999999999, "text": " What a time to be alive."}, {"start": 551.7199999999999, "end": 556.24, "text": " What you see here is a report of this exact paper we have talked about which was made"}, {"start": 556.24, "end": 557.92, "text": " by weights and biases."}, {"start": 557.92, "end": 560.12, "text": " I put a link to it in the description."}, {"start": 560.12, "end": 561.12, "text": " Make sure to have a look."}, {"start": 561.12, "end": 564.56, "text": " I think it helps you understand this paper better."}, {"start": 564.56, "end": 569.4399999999999, "text": " If you work with learning algorithms on a regular basis, make sure to check out weights and"}, {"start": 569.4399999999999, "end": 570.4399999999999, "text": " biases."}, {"start": 570.4399999999999, "end": 575.2399999999999, "text": " Their system is designed to help you organize your experiments and it is so good it could"}, {"start": 575.2399999999999, "end": 580.52, "text": " shave off weeks or even months of work from your projects and is completely free for all"}, {"start": 580.52, "end": 584.56, "text": " individuals, academics and open source projects."}, {"start": 584.56, "end": 589.2399999999999, "text": " This really is as good as it gets and it is hardly a surprise that they are now used by"}, {"start": 589.2399999999999, "end": 593.1199999999999, "text": " over 200 companies and research institutions."}, {"start": 593.12, "end": 598.68, "text": " Make sure to visit them through wnb.com slash papers or just click the link in the video"}, {"start": 598.68, "end": 601.76, "text": " description and you can get a free demo today."}, {"start": 601.76, "end": 606.48, "text": " Our thanks to weights and biases for their long standing support and for helping us make"}, {"start": 606.48, "end": 607.88, "text": " better videos for you."}, {"start": 607.88, "end": 634.64, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=v5pOsQEOsyA
Finally, Video Stabilization That Works! 🤳
❤️ Check out Perceptilabs and sign up for a free demo here: https://www.perceptilabs.com/papers 📝 The paper "FuSta - Hybrid Neural Fusion for Full-frame Video Stabilization" is available here: - Paper https://alex04072000.github.io/FuSta/ - Code: https://github.com/alex04072000/FuSta - Colab: https://colab.research.google.com/drive/1l-fUzyM38KJMZyKMBWw_vu7ZUyDwgdYH?usp=sharing 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-2375579/ Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #stabilization #selfies
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to talk about video stabilization. A typical application of this is when we record family memories and other cool events, and sometimes the footage gets so shaky that we barely know what is going on. In these cases, video stabilization techniques can come to the rescue, which means that in goes a shaky video and out comes a smooth video. Well, that is easier said than done. Despite many years of progress, there is a great selection of previous methods that can do that; however, they suffer from one of two issues. Issue number one is cropping. This means that we get usable results, but we have to pay a great price for it, which is cropping away a great deal of the video content. Issue number two is when we get the entirety of the video. No cropping; however, the price to be paid for this is that we get lots of issues that we call visual artifacts. Unfortunately, today, when we stabilize, we have to choose our poison. It's either cropping or artifacts. Which one would you choose? That is difficult to decide, of course, because neither of these two trade-offs is great. So, our question today is, can we do better? Well, the law of papers says that, of course, just one or two more papers down the line and this will be way better. So, let's see. This is what this new method is capable of. Hold on to your papers and notice that this will indeed be a full-size video, so we already know that there will probably be artifacts. But, wait a second. No artifacts. Whoa! How can that be? What does this new method do that previous techniques didn't? These magical results are a combination of several things. One, the new method can estimate the motion of these objects better. Two, it removes blurred images from the videos. And three, it collects data from neighboring video frames more effectively. This leads to a greater understanding of the video it is looking at. Now, of course, not even this technique is perfect. Rapid camera motion may lead to warping, and if you look carefully, you may find some artifacts, usually around the sides of the screen. So far, we have looked at previous methods and the new method. It seems better. That's great. But how do we measure which one is better? Do we just look? An even harder question would be: if the new method is indeed better, okay, but by how much better is it? Let's try to answer all of these questions. We can evaluate these techniques against each other in three different ways. One, we can look at the footage ourselves. We have already done that, and we had to tightly hold on to our papers. It has done quite well in this test. Test number two is a quantitative test. In other words, we can mathematically define how much distortion there is in an output video, how smooth it is and more, and compare the output videos based on these metrics. In many cases, these previous techniques are quite close to each other. And now, let's unveil the new method. Whoa! It is rated best or second best on six out of eight tests. This is truly remarkable, especially given that some of these competitors are from less than a year ago. That is nimble progress in machine learning research. Loving it. And the third way to test which technique is better, and by how much, is by conducting a user study. The authors have done that too. In this, 46 humans were called in, were shown the shaky input video, the result of a previous method and the new method, and were asked three questions. Which video preserves the most content, which has fewer imperfections, and which is more stable? And the results were stunning. Despite looking at many different competing techniques, the participants found the new method to be better at the very least 60% of the time on all three questions. In some cases, even 90% of the time, or higher. Praise the papers. Now, there is only one question left. If it is so much better than previous techniques, how much longer does it take to run? With one exception, these previous methods take from half a second to about 7.5 seconds per frame, and this new one asks for 9.5 seconds per frame. And in return, it creates these absolutely amazing results. So, from this glorious day on, fewer or maybe no important memories will be lost due to camera shaking. What a time to be alive! Perceptilabs is a visual API for TensorFlow, carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables, gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers to easily install the free local version of their system today. Our thanks to Perceptilabs for their support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
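The quantitative test described above relies on metrics such as distortion and smoothness of the stabilized output. As a hedged, much simpler stand-in for such metrics, the sketch below estimates per-frame camera motion with OpenCV feature tracking and scores a video by the "jerkiness" of that motion path (the mean magnitude of its second differences); this is an illustrative proxy, not the evaluation protocol used in the paper, and the video file names are placeholders.

```python
import cv2
import numpy as np

def jerkiness(video_path):
    """Lower is smoother: mean acceleration of the estimated camera path."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    motions = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=30)
        if pts is None:
            prev_gray = gray
            continue
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good = status.flatten() == 1
        m, _ = cv2.estimateAffinePartial2D(pts[good], nxt[good])
        if m is not None:
            motions.append((m[0, 2], m[1, 2]))  # dominant translation dx, dy
        prev_gray = gray
    path = np.cumsum(np.array(motions), axis=0)   # estimated camera trajectory
    accel = np.diff(path, n=2, axis=0)            # second differences ~ acceleration
    return float(np.mean(np.linalg.norm(accel, axis=1)))

# Example with placeholder file names: the stabilized clip should score lower.
# print(jerkiness("shaky.mp4"), jerkiness("stabilized.mp4"))
```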
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.6000000000000005, "end": 8.6, "text": " Today, we are going to talk about video stabilization."}, {"start": 8.6, "end": 14.1, "text": " A typical application of this is when we record family memories and other cool events"}, {"start": 14.1, "end": 19.900000000000002, "text": " and sometimes the footage gets so shaky that we barely know what is going on."}, {"start": 19.900000000000002, "end": 24.6, "text": " In these cases, video stabilization techniques can come to the rescue,"}, {"start": 24.6, "end": 30.400000000000002, "text": " which means that in goes a shaky video and out comes a smooth video."}, {"start": 30.400000000000002, "end": 33.2, "text": " Well, that is easier said than done."}, {"start": 33.2, "end": 37.7, "text": " Despite many years of progress, there is a great selection of previous methods"}, {"start": 37.7, "end": 43.0, "text": " that can do that, however, they suffer from one of two issues."}, {"start": 43.0, "end": 45.400000000000006, "text": " Issue number one is cropping."}, {"start": 45.400000000000006, "end": 50.5, "text": " This means that we get usable results, but we have to pay a great price for it,"}, {"start": 50.5, "end": 56.6, "text": " which is cropping away a great deal of the video content."}, {"start": 56.6, "end": 60.4, "text": " Issue number two is when we get the entirety of the video."}, {"start": 60.4, "end": 66.1, "text": " No cropping, however, the price to be paid for this is that we get lots of issues"}, {"start": 66.1, "end": 68.6, "text": " that we call visual artifacts."}, {"start": 68.6, "end": 73.7, "text": " Unfortunately, today, when we stabilize, we have to choose our poison."}, {"start": 73.7, "end": 77.7, "text": " It's either cropping or artifacts."}, {"start": 77.7, "end": 82.10000000000001, "text": " Which one would you choose? That is difficult to decide, of course,"}, {"start": 82.10000000000001, "end": 84.8, "text": " because none of these two trade-offs are great."}, {"start": 84.8, "end": 88.60000000000001, "text": " So, our question today is, can we do better?"}, {"start": 88.60000000000001, "end": 94.4, "text": " Well, the law of papers says that, of course, just one or two more papers down the line"}, {"start": 94.4, "end": 96.5, "text": " and this will be way better."}, {"start": 96.5, "end": 98.2, "text": " So, let's see."}, {"start": 98.2, "end": 101.30000000000001, "text": " This is what this new method is capable of."}, {"start": 101.30000000000001, "end": 106.30000000000001, "text": " Hold on to your papers and notice that this will indeed be a full-size video"}, {"start": 106.3, "end": 110.7, "text": " so we already know that probably there will be artifacts."}, {"start": 110.7, "end": 114.8, "text": " But, wait a second. No artifacts."}, {"start": 114.8, "end": 117.5, "text": " Whoa! 
How can that be?"}, {"start": 117.5, "end": 121.5, "text": " What does this new method do that previous techniques didn't?"}, {"start": 121.5, "end": 125.5, "text": " These magical results are a combination of several things."}, {"start": 125.5, "end": 130.5, "text": " One, the new method can estimate the motion of these objects better."}, {"start": 130.5, "end": 134.1, "text": " Two, it removes blurred images from the videos"}, {"start": 134.1, "end": 139.1, "text": " and three, collects data from neighboring video frames more effectively."}, {"start": 139.1, "end": 143.5, "text": " This leads to a greater understanding of the video it is looking at."}, {"start": 143.5, "end": 146.9, "text": " Now, of course, not even this technique is perfect."}, {"start": 146.9, "end": 150.1, "text": " Rapid camera motion may lead to warping"}, {"start": 150.1, "end": 153.29999999999998, "text": " and if you look carefully, you may find some artifacts,"}, {"start": 153.29999999999998, "end": 156.29999999999998, "text": " usually around the sides of the screen."}, {"start": 156.29999999999998, "end": 160.5, "text": " So far, we have looked at previous methods and the new method."}, {"start": 160.5, "end": 163.4, "text": " It seems better. That's great."}, {"start": 163.4, "end": 166.9, "text": " But, how do we measure which one is better?"}, {"start": 166.9, "end": 168.4, "text": " Do we just look?"}, {"start": 168.4, "end": 172.6, "text": " An even harder question would be if the new method is indeed better,"}, {"start": 172.6, "end": 175.9, "text": " okay, but by how much better is it?"}, {"start": 175.9, "end": 178.8, "text": " Let's try to answer all of these questions."}, {"start": 178.8, "end": 183.3, "text": " We can evaluate these techniques against each other in three different ways."}, {"start": 183.3, "end": 186.5, "text": " One, we can look at the footage ourselves."}, {"start": 186.5, "end": 190.9, "text": " We have already done that and we had to tightly hold on to our papers."}, {"start": 190.9, "end": 193.70000000000002, "text": " It has done quite well in this test."}, {"start": 193.70000000000002, "end": 196.70000000000002, "text": " Test number two is a quantitative test."}, {"start": 196.70000000000002, "end": 202.70000000000002, "text": " In other words, we can mathematically define how much distortion there is in an output video,"}, {"start": 202.70000000000002, "end": 209.1, "text": " how smooth it is and more, and compare the output videos based on these metrics."}, {"start": 209.1, "end": 213.8, "text": " In many cases, these previous techniques are quite close to each other."}, {"start": 213.8, "end": 217.5, "text": " And now, let's unveil the new method."}, {"start": 217.5, "end": 223.7, "text": " Whoa! It's called best or second best on six out of eight tests."}, {"start": 223.7, "end": 230.7, "text": " This is truly remarkable, especially given that some of these competitors are from less than a year ago."}, {"start": 230.7, "end": 234.4, "text": " That is nimble progress in machine learning research."}, {"start": 234.4, "end": 242.5, "text": " Loving it. 
And the third way to test which technique is better and by how much is by conducting a user study."}, {"start": 242.5, "end": 244.5, "text": " The authors have done that too."}, {"start": 244.5, "end": 249.7, "text": " In this, 46 humans were called in, were shown the shaky input video,"}, {"start": 249.7, "end": 255.7, "text": " the result of a previous method and the new method and were asked three questions."}, {"start": 255.7, "end": 263.2, "text": " Which video preserves the most content, which has fewer imperfections and which is more stable?"}, {"start": 263.2, "end": 265.6, "text": " And the results were stunning."}, {"start": 265.6, "end": 271.7, "text": " Despite looking at many different competing techniques, the participants found the new method to be better"}, {"start": 271.7, "end": 276.7, "text": " at the very least 60% of the time on all three questions."}, {"start": 276.7, "end": 281.7, "text": " In some cases, even 90% of the time, or higher."}, {"start": 281.7, "end": 283.4, "text": " Praise the papers."}, {"start": 283.4, "end": 286.3, "text": " Now, there is only one question left."}, {"start": 286.3, "end": 292.4, "text": " If it is so much better than previous techniques, how much longer does it take to run?"}, {"start": 292.4, "end": 299.2, "text": " With one exception, these previous methods take from half a second to about 7.5 seconds per frame"}, {"start": 299.2, "end": 303.4, "text": " and this new one asks for 9.5 seconds per frame."}, {"start": 303.4, "end": 308.09999999999997, "text": " And in return, it creates these absolutely amazing results."}, {"start": 308.09999999999997, "end": 316.09999999999997, "text": " So, from this glorious day on, fewer or maybe no important memories will be lost due to camera shaking."}, {"start": 316.09999999999997, "end": 318.3, "text": " What a time to be alive!"}, {"start": 318.3, "end": 325.9, "text": " Perceptilebs is a visual API for TensorFlow, carefully designed to make machine learning as intuitive as possible."}, {"start": 325.9, "end": 332.09999999999997, "text": " This gives you a faster way to build out models with more transparency into how your model is architected,"}, {"start": 332.09999999999997, "end": 335.09999999999997, "text": " how it performs, and how to debug it."}, {"start": 335.09999999999997, "end": 339.7, "text": " Look, it lets you toggle between the visual modeler and the code editor."}, {"start": 339.7, "end": 344.9, "text": " It even generates visualizations for all the model variables and gives you recommendations"}, {"start": 344.9, "end": 349.59999999999997, "text": " both during modeling and training and does all this automatically."}, {"start": 349.59999999999997, "end": 355.4, "text": " I only wish I had a tool like this when I was working on my neural networks during my PhD years."}, {"start": 355.4, "end": 362.09999999999997, "text": " Visit perceptilebs.com slash papers to easily install the free local version of their system today."}, {"start": 362.09999999999997, "end": 367.59999999999997, "text": " Our thanks to perceptilebs for their support and for helping us make better videos for you."}, {"start": 367.6, "end": 395.3, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=6SJ19OgHi4w
Soap Bubble Simulations Are Now Possible! 🧼
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "A Model for Soap Film Dynamics with Evolving Thickness" is available here: https://sadashigeishida.bitbucket.io/soapfilm_with_thickness/index.html ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-3594979/ Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, I will try to show you the incredible progress in computer graphics research through the lens of bubbles in computer simulations. Yes, bubbles indeed. Approximately a year ago, we covered a technique which could be used to add bubbles to an already existing fluid simulation. This paper appeared in 2012 and described a super simple method that helped us compute where bubbles appear and disappear over time. The best part of this was that it could be added after the simulation has been finalized, which is an insane value proposition. If we find ourselves yearning for some bubbles, we just add them afterwards, and if we don't like the results, we can take them out with one click. Now, simulations are not only about sights. What about sounds? In 2016, this paper did something that previously seemed impossible. It took this kind of simulation data and made sure that now we can not only add bubbles to a plain water simulation, but also simulate how they would sound. On the geometry side, a follow-up paper appeared just a year later that could simulate a handful of bubbles colliding and sticking together. Then, three years later, in 2020, Christopher Batty's group also proposed a method that was capable of simulating merging and coalescing behavior in larger-scale simulations. So, what about today's paper? Are we going even larger, with hundreds of thousands or maybe even millions of bubbles? No, we are going to take just one bubble, or at most a handful, and have a really close look at a method that is capable of simulating these beautiful, evolving rainbow patterns. The key to this work is that it models how the thickness of the surfaces changes over time. That makes all the difference. Let's look under the hood and observe how much of an effect the evolving layer thickness has on the outputs. The red color coding represents the thinner and the blue shows us the thicker regions. This shows us that some regions in these bubbles are more than twice as thick as others. And there are also more extreme cases: there is a six-fold difference between this and this part. You can see how the difference in thickness leads to waves of light interfering with the bubble and creating these beautiful rainbow patterns. You can't get this without a proper simulator like this one. Loving it. This variation in thickness is responsible for a selection of premium-quality effects in a simulation: beyond surface vortices, interference patterns can also be simulated, as well as deformation-dependent rupturing of soap films. This incredible technique can simulate all of these phenomena. And now our big question is, okay, it simulates all of these, but how well does it do that? It is good enough to fool the human eye, but how does it compare to the strictest adversary of all? Reality. I hope you know what's coming. Oh yeah, hold on to your papers, because now we will let reality be our judge and compare the simulated results to it. That is one of the biggest challenges in any kind of simulation research, so let's see. This is a piece of real footage of a curved soap film surface where these rainbow patterns get advected by an external force field. Beautiful. And now let's see the simulation. Wow, this has to be really close. Let's see them side by side and decide together. Whoa. The match in the swirly region here is just exceptional.
Now, note that even if the algorithm is 100% correct, this experiment cannot be a perfect match because not only the physics of the soap film has to be simulated correctly, but the forces that move the rainbow patterns as well. We don't have this information from the real-world footage, so the authors had to try to reproduce these forces, which is not part of the algorithm, but a property of the environment. So I would say that this footage is as close as one can possibly get. My goodness, well done. So how much do we have to pay for this in terms of computation time? If you ask me, I would pay at the very least double for this. And now comes the best part. If you have been holding onto your paper so far, now squeeze that paper because in the cheaper cases, only 4% to 7% extra computation, which is outrageous. There is this more complex case with the large deforming sphere. In this case, the new technique indeed makes a huge difference. So how much extra computation do we have to pay for this? Only 31%. 31% extra computation for this. That is a fantastic deal. You can sign me up right away. As you see, the pace of progress in computer graphics research is absolutely incredible and these simulations are just getting better and better by the day. Imagine what we will be able to do just two more papers down the line. What a time to be alive. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
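The rainbow patterns discussed above come from thin-film interference: light reflected off the front and back surfaces of the soap film interferes, and the film's local thickness decides which wavelengths cancel or reinforce. The toy model below is a two-beam, normal-incidence approximation, not the paper's renderer, and simply shows how thickness maps to an approximate RGB tint.

```python
import numpy as np

def film_reflectance(thickness_nm, wavelength_nm, n_film=1.33):
    """Two-beam thin-film interference at normal incidence.

    The reflection at the outer air-to-film interface picks up a half-wave
    phase shift while the inner one does not, so the reflected intensity is
    proportional to sin^2(2*pi*n*d/lambda)."""
    phase = 2.0 * np.pi * n_film * thickness_nm / wavelength_nm
    return np.sin(phase) ** 2

def film_tint(thickness_nm):
    # Sample three representative wavelengths as a crude R, G, B response
    wavelengths = np.array([650.0, 532.0, 450.0])
    return film_reflectance(thickness_nm, wavelengths)

for d in (100, 250, 500, 750, 1500):   # film thickness in nanometres
    print(d, np.round(film_tint(d), 2))
```

Changing the thickness shifts which wavelengths interfere constructively, which is why the two-fold and six-fold thickness variations mentioned in the transcript translate directly into visibly different colors.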
[{"start": 0.0, "end": 4.5600000000000005, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Dr. Karo Zonai-Fehir."}, {"start": 4.5600000000000005, "end": 9.92, "text": " Today, I will try to show you the incredible progress in computer graphics research"}, {"start": 9.92, "end": 13.120000000000001, "text": " through the lens of bubbles in computer simulations."}, {"start": 13.84, "end": 20.0, "text": " Yes, bubbles indeed. Approximately a year ago, we covered a technique which could be used"}, {"start": 20.0, "end": 23.2, "text": " to add bubbles to an already existing fluid simulation."}, {"start": 23.2, "end": 30.8, "text": " This paper appeared in 2012 and described a super simple method that helped us compute where bubbles"}, {"start": 30.8, "end": 37.36, "text": " appear and disappear over time. The best part of this was that this could be added after the"}, {"start": 37.36, "end": 43.36, "text": " simulation has been finalized, which is an insane value proposition. If we find ourselves"}, {"start": 43.36, "end": 48.96, "text": " yearning for some bubbles, we just add them afterwards, and if we don't like the results,"}, {"start": 48.96, "end": 54.88, "text": " we can take them out with one click. Now, simulations are not only about sites,"}, {"start": 55.52, "end": 62.56, "text": " what about sounds? In 2016, this paper did something that previously seemed impossible."}, {"start": 62.56, "end": 68.48, "text": " It took this kind of simulation data and made sure that now we can not only add bubbles to"}, {"start": 68.48, "end": 79.92, "text": " a plain water simulation, but also simulate how they would sound."}, {"start": 89.44, "end": 95.52000000000001, "text": " On the geometry side, a follow-up paper appeared just a year later that could simulate a handful"}, {"start": 95.52, "end": 102.39999999999999, "text": " of bubbles colliding and sticking together. Then, three years later, in 2020, Christopher"}, {"start": 102.39999999999999, "end": 109.52, "text": " Betty's group also proposed a method that was capable of simulating merging and coalescing behavior"}, {"start": 109.52, "end": 117.36, "text": " on larger-scale simulations. So, what about today's paper? Are we going even larger with hundreds"}, {"start": 117.36, "end": 125.36, "text": " of thousands or maybe even millions of bubbles? No, we are going to take just one bubble or"}, {"start": 125.36, "end": 131.76, "text": " at most a handful and have a real close look at a method that is capable of simulating"}, {"start": 131.76, "end": 138.4, "text": " these beautiful, evolving rainbow patterns. The key to this work is that it is modeling how the"}, {"start": 138.4, "end": 145.52, "text": " thickness of the surfaces changes over time. That makes all the difference. Let's look under the"}, {"start": 145.52, "end": 150.88, "text": " hood and observe how much of an effect the evolving layer thickness has on the outputs."}, {"start": 150.88, "end": 157.28, "text": " The red color coding represents thinner and the blue shows us the thicker regions."}, {"start": 157.28, "end": 163.44, "text": " This shows us that some regions in these bubbles are more than twice as thick as others."}, {"start": 163.44, "end": 171.44, "text": " And there are also more extreme cases, there is a six-time difference between this and this"}, {"start": 171.44, "end": 178.07999999999998, "text": " part. 
You can see how the difference in thickness leads to waves of light interfering with the bubble"}, {"start": 178.08, "end": 184.56, "text": " and creating these beautiful rainbow patterns. You can't get this without a proper simulator"}, {"start": 184.56, "end": 191.04000000000002, "text": " like this one. Loving it. This variation in thicknesses is responsible for a selection"}, {"start": 191.04000000000002, "end": 197.60000000000002, "text": " of premium quality effects in a simulation, beyond surface vertices, interference patterns can"}, {"start": 197.60000000000002, "end": 207.52, "text": " also be simulated, deformation-dependent rupturing of soap films. This incredible technique"}, {"start": 207.52, "end": 215.36, "text": " can simulate all of these phenomena. And now our big question is, okay, it simulates all of these,"}, {"start": 215.36, "end": 222.48000000000002, "text": " but how well does it do that? It is good enough to fool the human eye, but how does it compare to"}, {"start": 222.48000000000002, "end": 230.48000000000002, "text": " the strictest adversary of all? Reality. I hope you know what's coming. Oh yeah, hold on to your"}, {"start": 230.48000000000002, "end": 236.64000000000001, "text": " papers because now we will let reality be our judge and compare the simulated results to that."}, {"start": 236.64, "end": 242.95999999999998, "text": " That is one of the biggest challenges in any kind of simulation research, so let's see."}, {"start": 243.51999999999998, "end": 249.04, "text": " This is a piece of real footage of a curved soap film surface where these rainbow patterns get"}, {"start": 249.04, "end": 255.44, "text": " convicted by an external force field. Beautiful. And now let's see the simulation."}, {"start": 258.15999999999997, "end": 264.71999999999997, "text": " Wow, this has to be really close. Let's see them side by side and decide together."}, {"start": 264.72, "end": 273.6, "text": " Whoa. The match in the Swirly region here is just exceptional. Now, note that even if the"}, {"start": 273.6, "end": 280.72, "text": " algorithm is 100% correct, this experiment cannot be a perfect match because not only the physics"}, {"start": 280.72, "end": 287.12, "text": " of the soap film has to be simulated correctly, but the forces that move the rainbow patterns as well."}, {"start": 287.68, "end": 293.12, "text": " We don't have this information from the real-world footage, so the authors had to try to reproduce"}, {"start": 293.12, "end": 298.56, "text": " these forces, which is not part of the algorithm, but a property of the environment."}, {"start": 299.2, "end": 306.48, "text": " So I would say that this footage is as close as one can possibly get. My goodness, well done."}, {"start": 307.36, "end": 313.6, "text": " So how much do we have to pay for this in terms of computation time? If you ask me, I would pay"}, {"start": 313.6, "end": 319.6, "text": " at the very least double for this. And now comes the best part. If you have been holding onto your"}, {"start": 319.6, "end": 327.92, "text": " paper so far, now squeeze that paper because in the cheaper cases, only 4% to 7% extra computation,"}, {"start": 327.92, "end": 334.8, "text": " which is outrageous. There is this more complex case with the large deforming sphere."}, {"start": 334.8, "end": 341.36, "text": " In this case, the new technique indeed makes a huge difference. So how much extra computation do"}, {"start": 341.36, "end": 351.36, "text": " we have to pay for this? Only 31%. 
31% extra computation for this. That is a fantastic deal."}, {"start": 351.36, "end": 357.12, "text": " You can sign me up right away. As you see, the pace of progress in computer graphics research"}, {"start": 357.12, "end": 362.56, "text": " is absolutely incredible and these simulations are just getting better and better by the day."}, {"start": 363.12, "end": 367.12, "text": " Imagine what we will be able to do just two more papers down the line."}, {"start": 367.12, "end": 374.32, "text": " What a time to be alive. This episode has been supported by Lambda GPU Cloud. If you're looking"}, {"start": 374.32, "end": 382.96, "text": " for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000,"}, {"start": 382.96, "end": 391.28000000000003, "text": " RTX 8000, and V100 instances. And hold onto your papers because Lambda GPU Cloud can cost less than"}, {"start": 391.28, "end": 400.4, "text": " half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers"}, {"start": 400.4, "end": 406.71999999999997, "text": " at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations,"}, {"start": 406.71999999999997, "end": 413.35999999999996, "text": " or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU"}, {"start": 413.35999999999996, "end": 419.03999999999996, "text": " instances today. Our thanks to Lambda for their long-standing support and for helping us make better"}, {"start": 419.04, "end": 426.64000000000004, "text": " videos for you. Thanks for watching and for your generous support, and I'll see you next time."}]
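The segments above describe how the simulated film thickness drives the evolving rainbow patterns on the bubbles. As a rough illustration only (not the paper's method, which evolves thickness over a simulated soap-film surface), here is a minimal Python sketch of the underlying optics: the standard two-beam thin-film interference approximation, showing how a two-fold change in thickness already shifts which wavelengths are reinforced. The refractive index, angles, and thickness values are placeholder assumptions.

```python
import numpy as np

def thin_film_reflectance(thickness_nm, wavelength_nm, n_film=1.33, angle_deg=0.0):
    """Two-beam interference approximation for a soap film in air.

    Light reflected from the front and back surfaces of the film picks up a
    path difference of 2 * n * d * cos(theta_t), plus a half-wave phase flip
    at the front (air -> water) interface. Constructive interference brightens
    the corresponding wavelength, which is what produces the rainbow bands.
    """
    theta_i = np.radians(angle_deg)
    # Refraction angle inside the film (Snell's law, air has n ~ 1).
    theta_t = np.arcsin(np.sin(theta_i) / n_film)
    # Optical path difference between the two reflected beams.
    opd = 2.0 * n_film * thickness_nm * np.cos(theta_t)
    # Phase difference, including the pi shift from the front-surface reflection.
    phase = 2.0 * np.pi * opd / wavelength_nm + np.pi
    # Normalized two-beam interference intensity in [0, 1].
    return 0.5 * (1.0 + np.cos(phase))

# A 2x difference in film thickness already changes which colors are reinforced.
for d in (300.0, 600.0):  # film thickness in nanometers (illustrative values)
    r = thin_film_reflectance(d, np.array([450.0, 550.0, 650.0]))  # blue, green, red
    print(f"thickness {d:.0f} nm -> per-wavelength reflectance {np.round(r, 2)}")
```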
Two Minute Papers
https://www.youtube.com/watch?v=4CYI6dt1ZNY
This AI Learned To Stop Time! ⏱
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Neural Scene Flow Fields for Space-Time View Synthesis of Dynamic Scenes" is available here: http://www.cs.cornell.edu/~zl548/NSFF/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-918686/ Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we get to be paper historians and witness the amazing progress in machine learning research together, and learn what is new in the world of NeRFs. But first, what is a NeRF? In March of 2020, a paper appeared describing an incredible technique by the name Neural Radiance Fields, or NeRF in short. This work enables us to take a bunch of input photos and their locations, learn from them, and synthesize new, previously unseen views of not just the materials in the scene, but the entire scene itself. And here we are talking not only about digital environments, but real scenes as well. Now, that's quite a value proposition, especially given that it also supported refractive and reflective surfaces, both of which are quite a challenge. However, of course, NeRF had its limitations. For instance, in many cases it had trouble with scenes with variable lighting conditions and lots of occluders. And to my delight, only five months later, in August of 2020, a follow-up paper appeared by the name NeRF in the Wild, or NeRF-W in short. Its speciality was tourist attractions that a lot of people take photos of, where we have a collection of photos taken at different times of the day and, of course, with a lot of people around. And lots of people, of course, means lots of occluders. NeRF-W improved the original algorithm to excel more in cases like this. A few months later, on November 25th, 2020, another follow-up paper appeared by the name Deformable Neural Radiance Fields. The goal here was to take a selfie video and turn it into a portrait that we can rotate around freely. This is something that the authors call a Nerfie. If we take the original NeRF technique to perform this, we see that it does not do well at all with moving things, and that's where the new deformable variant really shines. And today's paper not only has some nice video results embedded on the front page, but it offers a new take on this problem and promises, quote, space-time view synthesis of dynamic scenes. Whoa! That is amazing. But what does that mean? What does this paper really do? Space-time view synthesis means that we can record a video of someone doing something. We are recording movement in time, and there is also movement in space, or in other words, the camera is moving. Both time and space are moving. And what this can do is, one, freeze one of those variables, in other words, pretend as if the camera didn't move. Or, two, pretend as if time didn't move. Or, three, generate new views of the scene while movement takes place. My favorite is that we can pretend to zoom in, and even better, zoom out, even if the recorded video looked like this. So how does this compare to previous methods? There are plenty of NeRF variants around. Is this really any good? Let's find out together. This is the original NeRF; we already know about this, and we are not surprised in the slightest that it's not so great on dynamic scenes with a lot of movement. However, what I am surprised by is that all of these previous techniques are from 2020, and all of them struggle with these cases. These comparisons are not against some ancient technology from 1985. No, no. All of them are from the same year. For instance, this previous work is called Consistent Video Depth Estimation and it is from August 2020. We showcased it in this series and marveled at all of these amazing augmented reality applications that it offered.
The snowing example here was one of my favorites. And today's paper appeared just three months later, in November 2020. And the authors still took the time and effort to compare against this work from just three months ago. That is fantastic. As you see, this previous method kind of works on this dog, but the lack of information in some regions is quite apparent. This is still maybe usable, but as soon as we transition into a more dynamic example, what do we get? Well, pandemonium. This is true for all previous methods. I cannot imagine that the new method from just a few months later could deal with this difficult case, and, look at that. So much better. It is still not perfect. You see that we have lost some detail, but witnessing this kind of progress in just a few months is truly a sight to behold. It really consistently outperforms all of these techniques from the same year. What a time to be alive. If you, like me, find yourself yearning for more quantitative comparisons, the numbers also show that the two variants of the new proposed technique indeed outpace the competition. And it can even do one more thing. Previous video stabilization techniques were good at taking a shaky input video and creating a smoother output. However, these results often came at the cost of a great deal of cropping. Not this new work. Look at how good it is at stabilization, and it does not have to crop all this data. Praise the papers. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000 and V100 instances, and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
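The transcript above describes NeRF as learning a scene from posed photos and synthesizing unseen views. Purely as an illustration of that idea, here is a minimal PyTorch sketch of the core recipe: a small MLP maps a 3D position and viewing direction to a color and a density, and colors sampled along a camera ray are composited with volume rendering. The layer sizes, sample counts, and near/far bounds are placeholder assumptions, and the sketch omits positional encoding, hierarchical sampling, and everything else that makes the real method (and its dynamic-scene follow-ups) work well.

```python
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Minimal NeRF-style field: (x, y, z) and view direction -> (RGB, density)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density = nn.Linear(hidden, 1)        # sigma, view-independent
        self.color = nn.Sequential(                # RGB, view-dependent
            nn.Linear(hidden + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, view_dir):
        h = self.trunk(xyz)
        sigma = torch.relu(self.density(h))
        rgb = self.color(torch.cat([h, view_dir], dim=-1))
        return rgb, sigma

def render_ray(model, origin, direction, near=2.0, far=6.0, n_samples=64):
    """Composite the colors of samples along one camera ray (volume rendering)."""
    t = torch.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction          # (n_samples, 3) points on the ray
    dirs = direction.expand(n_samples, 3)
    rgb, sigma = model(pts, dirs)
    delta = t[1] - t[0]
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * delta)            # opacity per sample
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0
    )                                                              # accumulated transmittance
    weights = alpha * trans                                        # contribution of each sample
    return (weights[:, None] * rgb).sum(dim=0)                     # final pixel color

# Training would minimize the squared error between rendered pixels and the
# corresponding pixels of the input photographs, one ray per pixel.
```

The design choice worth noting is that density depends only on position while color also depends on the viewing direction, which is what lets such a field represent view-dependent effects like reflections.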
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.8, "end": 10.5, "text": " Today we get to be a paper historian and witness the amazing progress in machine learning"}, {"start": 10.5, "end": 15.44, "text": " research together and learn what is new in the world of Nerf's."}, {"start": 15.44, "end": 18.92, "text": " But first, what is a Nerf?"}, {"start": 18.92, "end": 25.34, "text": " In March of 2020, a paper appeared describing an incredible technique by the name Mural"}, {"start": 25.34, "end": 28.400000000000002, "text": " Radiance Fields or Nerf in short."}, {"start": 28.4, "end": 34.76, "text": " This work enables us to take a bunch of input photos and their locations, learn it, and"}, {"start": 34.76, "end": 40.7, "text": " synthesize new previously unseen views of not just the materials in the scene, but the"}, {"start": 40.7, "end": 43.239999999999995, "text": " entire scene itself."}, {"start": 43.239999999999995, "end": 49.239999999999995, "text": " And here we are talking not only digital environments, but also real scenes as well."}, {"start": 49.239999999999995, "end": 55.64, "text": " Now, that's quite a value proposition, especially given that it also supported refractive and"}, {"start": 55.64, "end": 58.28, "text": " reflective surfaces as well."}, {"start": 58.28, "end": 60.6, "text": " These are both quite a challenge."}, {"start": 60.6, "end": 64.72, "text": " However, of course, Nerf had its limitations."}, {"start": 64.72, "end": 70.08, "text": " For instance, in many cases it had trouble with scenes with variable lighting conditions"}, {"start": 70.08, "end": 73.24000000000001, "text": " and lots of occluders."}, {"start": 73.24000000000001, "end": 79.84, "text": " And to my delight, only five months later, in August of 2020, a follow-up paper appeared"}, {"start": 79.84, "end": 84.68, "text": " by the name Nerf in the world or Nerf W in short."}, {"start": 84.68, "end": 90.16000000000001, "text": " This speciality was tourist attractions that a lot of people take photos of and we then"}, {"start": 90.16000000000001, "end": 95.88000000000001, "text": " have a collection of photos taken during a different time of the day and, of course,"}, {"start": 95.88000000000001, "end": 98.4, "text": " with a lot of people around."}, {"start": 98.4, "end": 103.12, "text": " And lots of people, of course, means lots of occluders."}, {"start": 103.12, "end": 108.68, "text": " Nerf W improved the original algorithm to excel more in cases like this."}, {"start": 108.68, "end": 116.32000000000001, "text": " A few months later, on 2020, November 25th, another follow-up paper appeared by the name"}, {"start": 116.32000000000001, "end": 120.28, "text": " deformable Neural Radiance Fields, the Nerf."}, {"start": 120.28, "end": 125.16000000000001, "text": " The goal here was to take a selfie video and turn it into a portrait that we can rotate"}, {"start": 125.16000000000001, "end": 127.0, "text": " around freely."}, {"start": 127.0, "end": 131.08, "text": " This is something that the authors call a Nerfie."}, {"start": 131.08, "end": 136.08, "text": " If we take the original Nerf technique to perform this, we see that it does not do well"}, {"start": 136.08, "end": 143.76000000000002, "text": " at all with moving things and that's where the new deformable variant really shines."}, {"start": 143.76000000000002, "end": 149.60000000000002, "text": " And today's paper not only has some nice video results embedded 
in the front page, but"}, {"start": 149.60000000000002, "end": 156.56, "text": " it offers a new take on this problem and offers, quote, space-time view synthesis of dynamic"}, {"start": 156.56, "end": 157.56, "text": " scenes."}, {"start": 157.56, "end": 158.56, "text": " Whoa!"}, {"start": 158.56, "end": 161.32000000000002, "text": " That is amazing."}, {"start": 161.32000000000002, "end": 163.44, "text": " But what does that mean?"}, {"start": 163.44, "end": 165.76000000000002, "text": " What does this paper really do?"}, {"start": 165.76, "end": 172.48, "text": " The space-time view synthesis means that we can record a video of someone doing something."}, {"start": 172.48, "end": 178.35999999999999, "text": " Since we are recording movement in time and there is also movement in space or in other"}, {"start": 178.35999999999999, "end": 181.2, "text": " words, the camera is moving."}, {"start": 181.2, "end": 184.28, "text": " Both time and space are moving."}, {"start": 184.28, "end": 190.56, "text": " And what this can do is one frees one of those variables in other words pretend as if the"}, {"start": 190.56, "end": 194.84, "text": " camera didn't move."}, {"start": 194.84, "end": 201.84, "text": " Or two pretend as if time didn't move."}, {"start": 201.84, "end": 209.6, "text": " Or three generate new views of the scene while movement takes place."}, {"start": 209.6, "end": 217.8, "text": " My favorite is that we can pretend to zoom in and even better zoom out even if the recorded"}, {"start": 217.8, "end": 228.48000000000002, "text": " video looked like this."}, {"start": 228.48000000000002, "end": 232.16000000000003, "text": " So how does this compare to previous methods?"}, {"start": 232.16000000000003, "end": 235.12, "text": " There are plenty of nerve variants around."}, {"start": 235.12, "end": 237.48000000000002, "text": " Is this really any good?"}, {"start": 237.48000000000002, "end": 239.52, "text": " Let's find out together."}, {"start": 239.52, "end": 244.76000000000002, "text": " This is the original nerve we already know about this and we are not surprised in the"}, {"start": 244.76, "end": 249.48, "text": " slightest that it's not so great on dynamic scenes with a lot of movement."}, {"start": 249.48, "end": 256.56, "text": " However, what I am surprised by is that all of these previous techniques are from 2020"}, {"start": 256.56, "end": 259.36, "text": " and all of them struggle with these cases."}, {"start": 259.36, "end": 264.15999999999997, "text": " These comparisons are not against some ancient technology from 1985."}, {"start": 264.15999999999997, "end": 265.4, "text": " No, no."}, {"start": 265.4, "end": 267.84, "text": " All of them are from the same year."}, {"start": 267.84, "end": 272.92, "text": " For instance, this previous work is called Consistent Video Depth Estimation and it is"}, {"start": 272.92, "end": 275.36, "text": " from August 2020."}, {"start": 275.36, "end": 280.42, "text": " We showcased it in this series and marveled at all of these amazing augmented reality"}, {"start": 280.42, "end": 282.52000000000004, "text": " applications that it offered."}, {"start": 282.52000000000004, "end": 286.76, "text": " The snowing example here was one of my favorites."}, {"start": 286.76, "end": 292.56, "text": " And today's paper appeared just three months later in November 2020."}, {"start": 292.56, "end": 297.8, "text": " And the authors still took the time and effort to compare against this work from just three"}, {"start": 297.8, "end": 299.32, "text": 
" months ago."}, {"start": 299.32, "end": 301.6, "text": " That is fantastic."}, {"start": 301.6, "end": 306.88, "text": " As you see, this previous method kind of works on this dog but the lack of information in"}, {"start": 306.88, "end": 309.6, "text": " some regions is quite apparent."}, {"start": 309.6, "end": 316.12, "text": " This is still maybe usable but as soon as we transition into a more dynamic example, what"}, {"start": 316.12, "end": 317.12, "text": " do we get?"}, {"start": 317.12, "end": 319.8, "text": " Well, pandemonium."}, {"start": 319.8, "end": 323.24, "text": " This is true for all previous methods."}, {"start": 323.24, "end": 328.64000000000004, "text": " I cannot imagine that the new method from just a few months later could deal with this"}, {"start": 328.64, "end": 333.2, "text": " difficult case and look at that."}, {"start": 333.2, "end": 334.71999999999997, "text": " So much better."}, {"start": 334.71999999999997, "end": 336.24, "text": " It is still not perfect."}, {"start": 336.24, "end": 341.2, "text": " You see that we have lost some detail but witnessing this kind of progress in just a"}, {"start": 341.2, "end": 344.68, "text": " few months is truly a sight to behold."}, {"start": 344.68, "end": 350.03999999999996, "text": " It really consistently outperforms all of these techniques from the same year."}, {"start": 350.03999999999996, "end": 351.91999999999996, "text": " What a time to be alive."}, {"start": 351.91999999999996, "end": 357.64, "text": " If you, like me, find yourself yearning for more quantitative comparisons, the numbers"}, {"start": 357.64, "end": 365.0, "text": " also show that the two variants of the new proposed technique indeed outpace the competition."}, {"start": 365.0, "end": 368.52, "text": " And it can even do one more thing."}, {"start": 368.52, "end": 373.88, "text": " Previous video stabilization techniques were good at taking a shaky input video and creating"}, {"start": 373.88, "end": 375.52, "text": " a smoother output."}, {"start": 375.52, "end": 381.91999999999996, "text": " However, these results often came at the cost of a great deal of cropping."}, {"start": 381.91999999999996, "end": 383.32, "text": " Not this new work."}, {"start": 383.32, "end": 389.48, "text": " Look at how good it is at stabilization and it does not have to crop all this data."}, {"start": 389.48, "end": 391.08, "text": " Praise the papers."}, {"start": 391.08, "end": 394.52, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 394.52, "end": 400.48, "text": " If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 400.48, "end": 408.24, "text": " They've recently launched Quadro RTX 6000, RTX 8000 and V100 instances and hold onto"}, {"start": 408.24, "end": 414.84000000000003, "text": " your papers because Lambda GPU Cloud can cost less than half of AWS and Asia."}, {"start": 414.84000000000003, "end": 420.36, "text": " Plus, they are the only Cloud service with 48GB RTX 8000."}, {"start": 420.36, "end": 426.76, "text": " Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances,"}, {"start": 426.76, "end": 428.6, "text": " workstations or servers."}, {"start": 428.6, "end": 434.56, "text": " Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances"}, {"start": 434.56, "end": 435.56, "text": " today."}, {"start": 435.56, "end": 440.24, "text": " Thanks to Lambda for their long-standing support and for helping 
us make better videos for"}, {"start": 440.24, "end": 441.24, "text": " you."}, {"start": 441.24, "end": 468.68, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=RUDWn_obddI
OpenAI Outperforms Some Humans In Article Summarization! 📜
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/openai/published-work/Learning-Dexterity-End-to-End--VmlldzoxMTUyMDQ 📝 The paper "Learning to Summarize with Human Feedback" is available here: https://openai.com/blog/learning-to-summarize-with-human-feedback/ Reddit links to the showcased posts: 1. https://www.reddit.com/r/AskAcademia/comments/lf7uk4/submitting_a_paper_independent_of_my_post_doc/ 2. https://www.reddit.com/r/AskAcademia/comments/l988py/british_or_american_phd/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Joshua Goller, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Thumbnail background image credit: https://pixabay.com/images/id-1989152/ Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. This paper will not have the visual fireworks that you see in many of our videos. Oftentimes you get ice cream for the eyes, but today you'll get ice cream for the mind. And when I read this paper, I almost fell off the chair, and I think this work teaches us important lessons that I hope you will appreciate too. So with that, let's talk about AIs dealing with text. This research field is improving at an incredible pace. For instance, four years ago, in 2017, scientists at OpenAI embarked on an AI project where they wanted to show a neural network a bunch of Amazon product reviews and teach it to generate new ones or continue a review when given one. Upon closer inspection, they noticed that the neural network had built up knowledge of not only language, but had also learned that it needs to create a state-of-the-art sentiment detector as well. This means that the AI recognized that in order to be able to continue a review, it needs to be able to understand English and efficiently detect whether the review seems positive or negative. This new work is about text summarization, and it really is something else. If you read Reddit, the popular online discussion website, and encounter a longer post, you may find a short summary, a TL;DR of the same post written by a fellow human. This is good not only for other readers who are in a hurry, but, less obviously, it is also good for something else. And now, hold on to your papers, because these summaries also provide fertile grounds for a learning algorithm to read a piece of long text and its short summary, and learn how the two relate to each other. This means that it can be used as training data and can be fed to a learning algorithm. Yum! And the point is that if we give enough of these pairs to these learning algorithms, they will learn to summarize other Reddit posts. So, let's see how well it performs. First, this method learned on about 100,000 well-curated Reddit posts and was also tested on other posts that it hadn't seen before. It was asked to summarize this post from the relationship advice subreddit, so let's see how well it did. If you feel like reading the text, you can pause the video here, or if you feel like embracing the TL;DR spirit, just carry on and look at these two summaries. One of these is written by a human and the other one by this new summarization technique. Do you know which is which? Please stop the video and let me know in the comments below. Thank you. So, this was written by a human and this by the new AI. And while, of course, this is subjective, I would say that the AI-written one feels at the very least as good as the human summary, and I can't wait to have a look at the more principled evaluation in the paper. Let's see. The higher we go here, the higher the probability of a human favoring the AI-written summary over a human-written one. And we have smaller AI models on the left and bigger ones to the right. This is the 50% reference line: below it, people tend to favor the human's version, and if it can get above the 50% line, the AI does a better job than the human-written TL;DRs in the dataset. Here are two proposed models. This one significantly outperforms, and this other one is a better match. However, whoa, look at that.
The authors also propose a human feedback model that, even at the smallest model size, handily outperforms human-written TL;DRs, and as we grow the AI model, it gets even better than that. Now that's incredible, and this is when I almost fell off the chair while reading this paper. But we are not done yet, not even close. Don't forget, this AI was trained on Reddit and was also tested on Reddit. So our next question is, of course: can it do anything else? How general is the knowledge that it gained? What if we give it a full news article from somewhere else, outside of Reddit? Let's see how it performs. Hmm, of course this is also subjective, but I would say both are quite good. The human-written summary provides a little more information, while the AI-written one captures the essence of the article and does it very concisely. Great job. So let's see the same graph for summarizing these articles outside of Reddit. I don't expect the AI to perform as well as with the Reddit posts, as it is outside its comfort zone, but my goodness, this still performs nearly as well as humans. That means that it indeed derived general knowledge from a really narrow training set, which is absolutely amazing. Now, ironically, you see this lead-3 technique dominating both humans and the AI. What could that be? Some unpublished superintelligent technique? Well, I will have to disappoint. This is not a super sophisticated technique but a dead simple one. So simple that it is just taking the first three sentences of the article, which humans seem to prefer a great deal. But note that this simple lead-3 technique only works for a narrow domain, while the AI has learned the English language, probably knows about sentiment, and a lot of other things that can be used elsewhere. And now, the two most impressive things from the paper, in my opinion. One, this is not just a neural network but a reinforcement learning algorithm that learns from human feedback. A similar technique has been used by DeepMind and other research labs to play video games or control drones, and it is really cool to see them excel in text summarization too. Two, it learned from humans but derived so much knowledge from these scores that over time it outperformed its own teacher. And the teacher here is not human beings in general, but people who write TL;DRs along with their posts on Reddit. That truly feels like something straight out of a science fiction movie. What a time to be alive. Now, of course, not even this technique is perfect. This human-versus-AI preference thing is just one way of measuring the quality of a summary. There are more sophisticated methods that involve coverage, coherence, accuracy and more. In some of these measurements the AI does not perform as well. But just imagine what this will be able to do two more papers down the line. This episode has been supported by Weights & Biases. In this post they show you how OpenAI's prestigious robotics team uses their tool to teach a robot hand to dexterously manipulate a Rubik's cube. During my PhD studies I trained a ton of neural networks which were used in our experiments. However, over time there was just too much data in our repositories, and what I am looking for is not data but insight. And that's exactly how Weights & Biases helps you, by organizing your experiments. It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub and more. And get this: Weights & Biases is free for all individuals, academics and open source projects.
Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
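The transcript above describes training a summarizer from human feedback, where people pick which of two summaries they prefer. A heavily hedged sketch of the usual first step of such a pipeline follows: fit a reward model on preference pairs with a logistic (Bradley-Terry style) loss so the preferred summary scores higher; a policy is then fine-tuned to maximize this learned reward. The class names, the assumed pre-encoded feature vectors, and the dimensions are illustrative placeholders, not OpenAI's actual implementation.

```python
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Scores a (post, summary) pair; the pair is assumed to be pre-encoded
    into a single feature vector by some text encoder (not shown here)."""
    def __init__(self, feat_dim=768):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, features):
        return self.head(features).squeeze(-1)   # one scalar reward per pair

def preference_loss(reward_model, feat_preferred, feat_rejected):
    """Logistic preference loss: push the preferred summary's score above the
    rejected summary's score for each human comparison."""
    r_pos = reward_model(feat_preferred)
    r_neg = reward_model(feat_rejected)
    return -torch.log(torch.sigmoid(r_pos - r_neg)).mean()

# Illustrative training step on a random batch of encoded comparison pairs.
model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
feat_preferred, feat_rejected = torch.randn(32, 768), torch.randn(32, 768)
loss = preference_loss(model, feat_preferred, feat_rejected)
opt.zero_grad()
loss.backward()
opt.step()
```

For comparison, the lead-3 baseline mentioned in the transcript is literally just joining the first three sentences of the article, something like `" ".join(article_sentences[:3])`.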
[{"start": 0.0, "end": 4.88, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karajor Naifahir."}, {"start": 4.88, "end": 10.24, "text": " This paper will not have the visual fireworks that you see in many of our videos."}, {"start": 10.24, "end": 16.240000000000002, "text": " Oftentimes you get ice cream for the eyes, but today you'll get an ice cream for the mind."}, {"start": 16.240000000000002, "end": 22.0, "text": " And when I read this paper, I almost fell off the chair and I think this work teaches us"}, {"start": 22.0, "end": 28.96, "text": " important lessons and I hope you will appreciate them too. So with that, let's talk about AI's"}, {"start": 28.96, "end": 34.64, "text": " and dealing with text. This research field is improving at an incredible pace."}, {"start": 34.64, "end": 41.68, "text": " For instance, four years ago, in 2017, scientists at OpenAI embarked on an AI project"}, {"start": 41.68, "end": 46.16, "text": " where they wanted to show a neural network a bunch of Amazon product reviews"}, {"start": 46.16, "end": 52.32, "text": " and wanted to teach it to be able to generate new ones or continue a review when given one."}, {"start": 52.88, "end": 57.44, "text": " Upon closer inspection, they noticed that the neural network has built up"}, {"start": 57.44, "end": 63.76, "text": " a knowledge of not only language, but also learned that it needs to create a state of the art"}, {"start": 63.76, "end": 69.75999999999999, "text": " sentiment detector as well. This means that the AI recognized that in order to be able to"}, {"start": 69.75999999999999, "end": 75.75999999999999, "text": " continue a review, it needs to be able to understand English and efficiently detect"}, {"start": 75.75999999999999, "end": 82.32, "text": " whether the review seems positive or negative. This new work is about text summarization"}, {"start": 82.32, "end": 88.72, "text": " and it really is something else. If you read, read it, the popular online discussion website,"}, {"start": 88.72, "end": 95.91999999999999, "text": " and encounter a longer post, you may find a short summary, a TRDR of the same post written by a"}, {"start": 95.91999999999999, "end": 102.32, "text": " fellow human. This is good for not only the other readers who are in a hurry, but it is less obvious"}, {"start": 102.32, "end": 108.32, "text": " that it is also good for something else. And now, hold on to your papers because these"}, {"start": 108.32, "end": 114.55999999999999, "text": " summaries also provide fertile grounds for a learning algorithm to read a piece of long text,"}, {"start": 114.55999999999999, "end": 121.11999999999999, "text": " and it's short summary and learn how the two relate to each other. This means that it can be"}, {"start": 121.11999999999999, "end": 128.0, "text": " used as training data and can be fed to a learning algorithm. Yum! And the point is that if we give"}, {"start": 128.0, "end": 133.35999999999999, "text": " enough of these pairs to these learning algorithms, they will learn to summarize other Reddit posts."}, {"start": 133.36, "end": 140.48000000000002, "text": " So, let's see how well it performs. 
First, this method learned on about 100,000"}, {"start": 140.48000000000002, "end": 145.84, "text": " well-curated Reddit posts and was also tested on other posts that it hadn't seen before."}, {"start": 145.84, "end": 152.32000000000002, "text": " It was asked to summarize this post from the relationship advice subreddit and let's see how well"}, {"start": 152.32000000000002, "end": 158.08, "text": " it did. If you feel like reading the text, you can pause the video here, or if you feel like"}, {"start": 158.08, "end": 164.96, "text": " embracing the TRDR spirit, just carry on and look at these two summarizations. One of these is"}, {"start": 164.96, "end": 171.68, "text": " written by a human and the other one by this new summarization technique. Do you know which is which?"}, {"start": 172.4, "end": 178.48000000000002, "text": " Please stop the video and let me know in the comments below. Thank you. So this was written by a"}, {"start": 178.48000000000002, "end": 186.0, "text": " human and this by the new AI. And while, of course, this is subjective, I would say that the AI"}, {"start": 186.0, "end": 193.2, "text": " written one feels at the very least as good as the human summary and I can't wait to have a look"}, {"start": 193.2, "end": 199.68, "text": " at the more principle deviation in the paper. Let's see. The higher we go here, the higher the"}, {"start": 199.68, "end": 206.88, "text": " probability of a human favoring the AI written summary to a human written one. And we have smaller"}, {"start": 206.88, "end": 214.56, "text": " AI models on the left and bigger ones to the right. This is the 50% reference line below it,"}, {"start": 214.56, "end": 221.28, "text": " people tend to favor the human's version and if it can get above the 50% line, the AI does a"}, {"start": 221.28, "end": 228.16, "text": " better job than human written TRDRs in the data set. Here are two proposed models. This one"}, {"start": 228.16, "end": 239.68, "text": " significantly outperforms and this other one is a better match. However, whoa, look at that."}, {"start": 239.68, "end": 245.68, "text": " The authors also propose a human feedback model that even for the smallest model,"}, {"start": 245.68, "end": 253.44, "text": " handily outperforms human written TRDRs and as we grow the AI model, it gets even better than that."}, {"start": 254.16, "end": 259.52, "text": " Now that's incredible and this is when I almost fell off the chair when reading this paper."}, {"start": 260.48, "end": 267.84000000000003, "text": " But we are not done yet, not even close. Don't forget this AI was trained on Reddit and was also"}, {"start": 267.84, "end": 275.11999999999995, "text": " tested on Reddit. So our next question is of course, can it do anything else? How general is"}, {"start": 275.11999999999995, "end": 281.2, "text": " the knowledge that it gained? What if we give it a full news article from somewhere else outside"}, {"start": 281.2, "end": 290.32, "text": " of Reddit? Let's see how it performs. Hmm, of course this is also subjective but I would say both"}, {"start": 290.32, "end": 296.23999999999995, "text": " are quite good. The human written summary provides a little more information while the AI"}, {"start": 296.24, "end": 302.96000000000004, "text": " written one captures the essence of the article and does it very concisely. Great job."}, {"start": 303.6, "end": 310.32, "text": " So let's see the same graph for summarizing these articles outside of Reddit. 
I don't expect the AI"}, {"start": 310.32, "end": 318.08, "text": " to perform as well as with the Reddit posts as it is outside the comfort zone but my goodness,"}, {"start": 318.08, "end": 324.88, "text": " this still performs nearly as well as humans. That means that it indeed derived general knowledge"}, {"start": 324.88, "end": 332.24, "text": " from a really narrow training set which is absolutely amazing. Now ironically you see this lead"}, {"start": 332.24, "end": 340.32, "text": " three technique dominating both humans and the AI. What could that be? Some unpublished super"}, {"start": 340.32, "end": 347.36, "text": " intelligent technique? Well, I will have to disappoint. This is not a super sophisticated technique"}, {"start": 347.36, "end": 353.92, "text": " but a dead simple one. So simple that it is just taking the first three sentences of the article"}, {"start": 353.92, "end": 360.08000000000004, "text": " which humans seem to prefer a great deal. But note that this simple lead three technique only"}, {"start": 360.08000000000004, "end": 366.88, "text": " works for a narrow domain while the AI has learned the English language probably knows about sentiment"}, {"start": 366.88, "end": 373.12, "text": " and a lot of other things that can be used elsewhere. And now the two most impressive things from"}, {"start": 373.12, "end": 380.48, "text": " the paper in my opinion, one, this is not a neural network but a reinforcement learning algorithm"}, {"start": 380.48, "end": 386.56, "text": " that learns from human feedback. A similar technique has been used by DeepMind and other research"}, {"start": 386.56, "end": 393.92, "text": " labs to play video games or control drones and it is really cool to see them excel in text summarization"}, {"start": 393.92, "end": 401.04, "text": " too. Two, it learned from humans but derived so much knowledge from these scores that over time"}, {"start": 401.04, "end": 407.36, "text": " it outperformed its own teacher. And the teacher here is not human beings in general but people"}, {"start": 407.36, "end": 414.24, "text": " who write TRDRs along their posts on Reddit. That truly feels like something straight out of a"}, {"start": 414.24, "end": 420.8, "text": " science fiction movie. What a time to be alive. Now of course not even this technique is perfect."}, {"start": 420.8, "end": 426.72, "text": " This human versus AI preference thing is just one way of measuring the quality of the summary."}, {"start": 426.72, "end": 433.04, "text": " There are more sophisticated methods that involve coverage, coherence, accuracy and more."}, {"start": 433.04, "end": 439.76000000000005, "text": " In some of these measurements the AI does not perform as well. But just imagine what this will be"}, {"start": 439.76000000000005, "end": 446.16, "text": " able to do two more papers down the line. This episode has been supported by weights and biases."}, {"start": 446.16, "end": 452.0, "text": " In this post they show you how open AI's prestigious robotics team uses their tool to"}, {"start": 452.0, "end": 458.24, "text": " teach a robot hand to dexterously manipulate a Rubik's cube. During my PhD studies I trained"}, {"start": 458.24, "end": 463.84000000000003, "text": " a ton of neural networks which were used in our experiments. However over time there was just"}, {"start": 463.84000000000003, "end": 470.72, "text": " too much data in our repositories and what I am looking for is not data but insight. 
And that's"}, {"start": 470.72, "end": 476.16, "text": " exactly how weights and biases helps you by organizing your experiments. It is used by more than"}, {"start": 476.16, "end": 483.12, "text": " 200 companies and research institutions including open AI, Toyota Research, GitHub and more."}, {"start": 483.12, "end": 490.16, "text": " And get this, weights and biases is free for all individuals, academics and open source projects."}, {"start": 490.16, "end": 496.88, "text": " Make sure to visit them through wnb.com slash papers or just click the link in the video description"}, {"start": 496.88, "end": 502.48, "text": " and you can get a free demo today. Our thanks to weights and biases for their long standing support"}, {"start": 502.48, "end": 507.36, "text": " and for helping us make better videos for you. Thanks for watching and for your generous support"}, {"start": 507.36, "end": 517.36, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=jjfDO2pWpys
DeepMind’s AI Watches YouTube and Learns To Play! ▶️🤖
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/latentspace/published-work/The-Science-of-Debugging-with-W-B-Reports--Vmlldzo4OTI3Ng 📝 The paper "Playing hard exploration games by watching YouTube" is available here: Paper: https://papers.nips.cc/paper/7557-playing-hard-exploration-games-by-watching-youtube.pdf Gameplay videos: https://www.youtube.com/playlist?list=PLZuOGGtntKlaOoq_8wk5aKgE_u_Qcpqhu 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Joshua Goller, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Between 2013 and 2015, DeepMind worked on an incredible learning algorithm by the name Deep Reinforcement Learning. This technique looked at the pixels of the game, was given a controller, and played much like a human would, with the exception that it learned to play some Atari games on a superhuman level. I tried to train it a few years ago and would like to invite you for a marvelous journey to see what happened. When it starts learning to play an old game, Atari Breakout, at first the algorithm loses all of its lives without any signs of intelligent action. If we wait a bit, it becomes better at playing the game, roughly matching the skill level of an adept player. But here's the catch: if we wait for longer, we get something absolutely spectacular. Over time, it learns to play like a pro and finds out that the best way to win the game is digging a tunnel through the bricks and hitting them from behind. This technique is a combination of a neural network that processes the visual data that we see on the screen and a reinforcement learner that comes up with the gameplay-related decisions. This is an amazing algorithm, a true breakthrough in AI research. A key point in this work was that the problem formulation here enabled us to measure our progress easily. We hit one brick, we get some points, so do a lot of that. Lose a few lives, the game ends, so don't do that. Easy enough. But there are other exploration-based games like Montezuma's Revenge or Pitfall that it was not good at. And man, these games are a nightmare for any AI because there is no score, or at the very least it's hard to define how well we are doing. Because there are no scores, it is hard to motivate an AI to do anything at all other than just wander around aimlessly. If no one tells us whether we are doing well or not, which way do we go? Explore this space, or go to the next one? How do we solve all this? And with that, let's discuss the state of play in AIs playing difficult exploration-based computer games. And I think you will love to see how far we have come since. First, there is a previous line of work that infused these agents with a very human-like property: curiosity. That agent was able to do much, much better at these games, and then got addicted to the TV. But that's a different story. Note that the TV problem has been remedied since. And this new method attempts to solve hard exploration games by watching YouTube videos of humans playing the game and learning from that. As you see, it just rips through these levels in Montezuma's Revenge and other games too. So I wonder, how does all this magic happen? How did this agent learn to explore? Well, it has three things going for it that really make this work. One, the skeptical scholar would say that all this does is just copy-paste what it saw from the human player. Also, imitation learning is not new, which is a point that we will address in a moment. So, why bother with this? Now, hold on to your papers and observe as it seems noticeably less efficient than the human teacher was. Until we realize that this is not the human player and this is not the AI, but the other way around. Look, it was so observant and took away so much from the human demonstrations that in the end, it became even more efficient than its human teacher. Whoa! Absolutely amazing. And while we are here, I would like to dissect this copy-paste argument.
You see, it has an understanding of the game and does not just copy the human demonstrator, but even if it just copied what it saw, it would not be so easy, because the AI only sees images and it has to translate how the images change in response to us pressing buttons on the controller. We might also encounter the same level, but at a different time, and we have to understand how to vanquish an opponent and how to perform that. Two, nobody hooked the agent up to the game's internal information, which is huge. This means that it doesn't know what buttons are pressed on the controller, no internal numbers or game states are given to it, and most importantly, it is also not given the score of the game. We discussed how difficult this makes everything. Unfortunately, this means that there is no easy way out. It really has to understand what it sees and mine out the relevant information from each of these videos. And as you see, it does that with flying colors. Loving it. And three, it can handle the domain gap. Previous imitation learning methods did not deal with that too well. So, what does that mean? Let's look at this latent space together and find out. This is what a latent space looks like if we just embed the pixels that we see in the videos. Don't worry, I'll tell you in a moment what that is. Here, the clusters are nicely crammed up, away from each other, so that's probably good, right? Well, in this problem, not so much. A latent space means a place where similar kinds of data are meant to be close to each other. These are snippets of the demonstration videos that the clusters relate to. Let's test that together. Do you think these images are similar? Yes, most of us humans would say that these are quite similar. In fact, they are nearly the same. So, is this a good latent space embedding? No, not in the slightest. This data is similar, therefore these should be close to each other, but this previous technique did not recognize that, because these images have slightly different colors and aspect ratios. This one has a text overlay, but we all understand that despite all that, we are looking at the same game through different windows. So, does the new technique recognize that? Oh, yes, beautiful. Praise the papers. Similar game states are now close to each other. We can align them properly and therefore we can learn more easily from them. This is one of the reasons why it can play so well. So, there you go. These new AI agents can look at how we perform complex exploration games and learn so well from us that in the end they do even better than we do. And now, to get them to write some amazing papers for us, or, you know, Two Minute Papers episodes. What a time to be alive. This episode has been supported by Weights & Biases. In this post, they show you how to use their tool to check and visualize what your neural network is learning, and, even more importantly, a case study on how to find bugs in your system and fix them. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that Weights & Biases is free for all individuals, academics and open source projects. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today.
Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
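The transcript above explains that the agent learns an embedding in which the same game state looks the same across differently colored, cropped, or overlaid YouTube videos, and then learns from the demonstration without access to the game score. The sketch below shows, very roughly and in a simplified form, one way such an embedding can be turned into a learning signal in the spirit of this line of work: place checkpoints along the embedded demonstration and give the agent a small bonus each time its own embedded observation gets close to the next checkpoint. The encoder is assumed to exist elsewhere, and the checkpoint spacing, similarity threshold, and bonus value are placeholder assumptions, not the paper's exact settings.

```python
import numpy as np

def make_checkpoints(demo_embeddings, every_n=16):
    """Pick every n-th frame of the embedded demonstration as a checkpoint."""
    return [demo_embeddings[i] for i in range(0, len(demo_embeddings), every_n)]

class ImitationReward:
    """Gives a one-off bonus when the agent's embedded observation comes close
    (by cosine similarity) to the next unvisited demonstration checkpoint."""
    def __init__(self, checkpoints, threshold=0.9, bonus=0.5):
        self.checkpoints = checkpoints
        self.threshold = threshold
        self.bonus = bonus
        self.next_idx = 0

    def __call__(self, agent_embedding):
        if self.next_idx >= len(self.checkpoints):
            return 0.0
        target = self.checkpoints[self.next_idx]
        sim = float(np.dot(agent_embedding, target) /
                    (np.linalg.norm(agent_embedding) * np.linalg.norm(target) + 1e-8))
        if sim > self.threshold:
            self.next_idx += 1   # checkpoint reached, move on to the next one
            return self.bonus
        return 0.0

# Usage sketch: embed the YouTube demo and the agent's frames with the same
# learned encoder, then add this bonus to whatever environment reward exists
# (in hard exploration games, often none at all).
demo = [np.random.randn(64) for _ in range(256)]   # stand-in for embedded demo frames
reward_fn = ImitationReward(make_checkpoints(demo))
r = reward_fn(np.random.randn(64))                 # bonus for one agent observation
```

The point of the design is that the reward only depends on distances in the learned embedding, which is exactly why the embedding has to bridge the domain gap between the agent's own pixels and the assorted YouTube recordings.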
[{"start": 0.0, "end": 7.6000000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karoji Zornaifahir, between 2013 and 2015"}, {"start": 7.6000000000000005, "end": 14.0, "text": " deep-mind worked on an incredible learning algorithm by the name Deep Reinforcement Learning."}, {"start": 14.0, "end": 20.88, "text": " This technique locked at the pixels of the game, was given a controller and played much like a human wood"}, {"start": 20.88, "end": 26.72, "text": " with the exception that it learned to play some Atari games on a superhuman level."}, {"start": 26.72, "end": 32.48, "text": " I have tried to train it a few years ago and would like to invite you for a marvelous journey"}, {"start": 32.48, "end": 38.56, "text": " to see what happened. When it starts learning to play an old game, Atari Breakout,"}, {"start": 38.56, "end": 44.239999999999995, "text": " at first the algorithm loses all of its lives without any signs of intelligent action."}, {"start": 44.879999999999995, "end": 50.64, "text": " If we wait a bit, it becomes better at playing the game, roughly matching the skill level of an"}, {"start": 50.64, "end": 59.04, "text": " adapt player. But here's the catch, if we wait for longer, we get something absolutely spectacular."}, {"start": 59.6, "end": 65.04, "text": " Over time, it learns to play like a pro and finds out that the best way to win the game"}, {"start": 65.04, "end": 70.96000000000001, "text": " is digging a tunnel through the bricks and hit them from behind. This technique is a combination"}, {"start": 70.96000000000001, "end": 77.04, "text": " of a neural network that processes the visual data that we see on the screen and a reinforcement"}, {"start": 77.04, "end": 83.44000000000001, "text": " learner that comes up with the gameplay related decisions. This is an amazing algorithm,"}, {"start": 83.44000000000001, "end": 89.84, "text": " a true breakthrough in AI research. A key point in this work was that the problem formulation here"}, {"start": 89.84, "end": 97.12, "text": " enabled us to measure our progress easily. We hit one break, we get some points, so do a lot of that."}, {"start": 97.68, "end": 104.64000000000001, "text": " Lose a few lives, the game ends, so don't do that. Easy enough. But there are other"}, {"start": 104.64, "end": 109.76, "text": " exploration-based games like Montezuma's Revenge or Pitfall that it was not good at."}, {"start": 110.48, "end": 117.76, "text": " And man, these games are a nightmare for any AI because there is no score, or at the very least"}, {"start": 117.76, "end": 123.68, "text": " it's hard to define how well we are doing. Because there are no scores, it is hard to motivate an AI"}, {"start": 123.68, "end": 130.0, "text": " to do anything at all other than just wander around aimlessly. If no one tells us if we are doing"}, {"start": 130.0, "end": 138.0, "text": " well or not, which way do we go? Explore this space or go to the next one. How do we solve all"}, {"start": 138.0, "end": 144.8, "text": " this? And with that, let's discuss the state of play in AI's playing difficult exploration-based"}, {"start": 144.8, "end": 149.6, "text": " computer games. And I think you will love to see how far we have come since."}, {"start": 150.48, "end": 156.4, "text": " First, there is a previous line of work that infused these agents with a very human-like property,"}, {"start": 156.4, "end": 164.64000000000001, "text": " Curiosity. 
That agent was able to do much, much better at these games and then got addicted to the"}, {"start": 164.64000000000001, "end": 170.24, "text": " TV. But that's a different story. Note that the TV problem has been remedied since."}, {"start": 170.96, "end": 178.24, "text": " And this new method attempts to solve hard exploration games by watching YouTube videos of humans"}, {"start": 178.24, "end": 184.88, "text": " playing the game. And learning from that, as you see, it just rips through these levels in Montezuma's"}, {"start": 184.88, "end": 192.64, "text": " revenge and other games too. So I wonder how does all this magic happen? How did this agent learn"}, {"start": 192.64, "end": 199.76, "text": " to explore? Well, it has three things going for it that really makes this work. One, the skeptical"}, {"start": 199.76, "end": 205.2, "text": " scholar would say that all this takes is just copy-pasting what it saw from the human player."}, {"start": 205.76, "end": 211.68, "text": " Also, imitation learning is not new, which is a point that we will address in a moment. So,"}, {"start": 211.68, "end": 219.12, "text": " why bother with this? Now, hold on to your papers and observe as it seems noticeably less efficient"}, {"start": 219.12, "end": 227.36, "text": " than the human teacher was. Until we realize that this is not the human player and this is not the AI,"}, {"start": 227.36, "end": 234.48000000000002, "text": " but the other way around. Look, it was so observant and took away so much from the human"}, {"start": 234.48, "end": 242.16, "text": " demonstrations that in the end, it became even more efficient than its human teacher. Whoa!"}, {"start": 242.16, "end": 247.92, "text": " Absolutely amazing. And while we are here, I would like to dissect this copy-paste argument."}, {"start": 247.92, "end": 253.51999999999998, "text": " You see, it has an understanding of the game and does not just copy the human demonstrator,"}, {"start": 253.83999999999997, "end": 261.2, "text": " but even if it just copied what it saw, it would not be so easy because the AI only sees images"}, {"start": 261.2, "end": 267.44, "text": " and it has to translate how the images change in response to us pressing buttons on the controller."}, {"start": 268.15999999999997, "end": 274.24, "text": " We might also encounter the same level, but at a different time, and we have to understand how to"}, {"start": 274.24, "end": 280.64, "text": " vanquish an opponent and how to perform that. Two, nobody hooked the agent into the game"}, {"start": 280.64, "end": 285.84, "text": " information, which is huge. This means that it doesn't know what buttons are pressed on the"}, {"start": 285.84, "end": 292.96, "text": " controller, no internal numbers or game states are given to it, and most importantly, it is also"}, {"start": 292.96, "end": 299.84, "text": " not given the score of the game. We discussed how difficult this makes everything. Unfortunately,"}, {"start": 299.84, "end": 305.76, "text": " this means that there is no easy way out. It really has to understand what it sees and mine out"}, {"start": 305.76, "end": 312.15999999999997, "text": " the relevant information from each of these videos. And as you see, it does that with flying colors."}, {"start": 312.16, "end": 319.68, "text": " Loving it. And three, it can handle the domain gap. Previous imitation learning methods did not"}, {"start": 319.68, "end": 325.92, "text": " deal with that too well. So, what does that mean? 
Let's look at this latent space together and find"}, {"start": 325.92, "end": 332.16, "text": " out. This is what a latent space looks like if we just embed the pixels that we see in the videos."}, {"start": 332.88, "end": 338.16, "text": " Don't worry, I'll tell you in a moment what that is. Here, the clusters are nicely crammed up"}, {"start": 338.16, "end": 345.12, "text": " away from each other, so that's probably good, right? Well, in this problem, not so much."}, {"start": 345.12, "end": 350.88000000000005, "text": " The latent space means a place where similar kinds of data are meant to be close to each other."}, {"start": 351.76000000000005, "end": 356.72, "text": " These are snippets of the demonstration videos that the clusters relate to. Let's test that"}, {"start": 356.72, "end": 364.88, "text": " together. Do you think these images are similar? Yes, most of us humans would say that these are"}, {"start": 364.88, "end": 371.68, "text": " quite similar. In fact, they are nearly the same. So, is this a good latent space embedding?"}, {"start": 372.88, "end": 379.6, "text": " No, not in the slightest. This data is similar. Therefore, these should be close to each other,"}, {"start": 379.6, "end": 385.12, "text": " but this previous technique did not recognize that because these images have slightly different colors,"}, {"start": 385.84, "end": 392.56, "text": " aspect ratios. This has a text overlay, but we all understand that despite all that,"}, {"start": 392.56, "end": 399.12, "text": " we are looking at the same game through different windows. So, does the new technique recognize that?"}, {"start": 400.32, "end": 407.2, "text": " Oh, yes, beautiful. Praise the papers. Similar game states are now close to each other."}, {"start": 407.2, "end": 413.36, "text": " We can align them properly and therefore we can learn more easily from them. This is one of the"}, {"start": 413.36, "end": 420.24, "text": " reasons why it can play so well. So, there you go. These new AI agents can look at how we perform"}, {"start": 420.24, "end": 427.52, "text": " complex exploration games and learn so well from us that in the end they do even better than we do."}, {"start": 428.24, "end": 434.56, "text": " And now, to get them to write some amazing papers for us, or you know, two-minute papers episodes."}, {"start": 435.2, "end": 441.28000000000003, "text": " What a time to be alive. This episode has been supported by weights and biases. In this post,"}, {"start": 441.28000000000003, "end": 447.12, "text": " they show you how to use their tool to check and visualize what your neural network is learning"}, {"start": 447.12, "end": 452.8, "text": " and even more importantly, a case study on how to find bugs in your system and fix them."}, {"start": 452.8, "end": 457.52, "text": " Weight and biases provides tools to track your experiments in your deep learning projects."}, {"start": 457.52, "end": 463.2, "text": " Their system is designed to save you a ton of time and money and it is actively used in projects"}, {"start": 463.2, "end": 469.84000000000003, "text": " at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that"}, {"start": 469.84, "end": 477.76, "text": " weights and biases is free for all individuals, academics and open source projects. 
It really is as good as it gets."}, {"start": 477.76, "end": 484.47999999999996, "text": " Make sure to visit them through wnb.com slash papers or just click the link in the video description"}, {"start": 484.47999999999996, "end": 489.76, "text": " and you can get a free demo today. Our thanks to weights and biases for their long-standing support"}, {"start": 489.76, "end": 494.71999999999997, "text": " and for helping us make better videos for you. Thanks for watching and for your generous support"}, {"start": 494.72, "end": 502.96000000000004, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=bnm7skt2aYE
An AI That Makes Dog Photos - But How? 🐶
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/ayush-thakur/ada/reports/Train-Generative-Adversarial-Network-With-Limited-Data--Vmlldzo1NDYyMjA 📝 The paper "Training Generative Adversarial Networks with Limited Data" is available here: Paper: https://arxiv.org/abs/2006.06676 Pytorch implementation: https://github.com/NVlabs/stylegan2-ada-pytorch 📝 My thesis with the quote is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/photorealistic-material-learning-and-synthesis/ Unofficial StyleGAN2-ADA trained on corgis (+ colab notebook): https://github.com/seawee1/Did-Somebody-Say-Corgi 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Joshua Goller, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #nvidia #stylegan2
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to explore a paper that improves on the incredible StyleGAN2. What is that? StyleGAN2 is a neural network-based learning algorithm that is not only capable of creating these eye-poppingly detailed images of human beings that don't even exist, but it also improved on its previous version in a number of different ways. For instance, with the original StyleGAN method, we could exert some artistic control over these images. However, look, you see how these parts of the teeth and eyes are pinned to a particular location and the algorithm just refuses to let them go, sometimes to the detriment of their surroundings. The improved StyleGAN2 method addressed this problem; you can see the results here. Teeth and eyes are now allowed to float around freely, and perhaps this is the only place on the internet where we can say that and be happy about it. It could also morph two images together, and it could do it not only for human faces, but for cars, buildings, horses, and more. And get this, this paper was published in December 2019, and since then it has been used in a number of absolutely incredible applications and follow-up works. Let's look at three of them. One, the first question I usually hear when I talk about an amazing paper like this is, okay, great, but when do I get to use this? And the answer is, right now, because it is implemented in Photoshop in a feature that is called Neural Filters. Two, artistic control over these images has improved so much that now we can pin down a few intuitive parameters and change them with minimal changes to other parts of the image. For instance, it could grow Elon Musk a majestic beard, and Elon Musk was not the only person who got an algorithmic beard. I hope you know what's coming. Yes, I got one too. Let me know in the comments whose beard you liked better. Three, a nice follow-up paper could take a photo of Abraham Lincoln and other historic figures and restore their images as if we were time travelers and took these photos with a more modern camera. The best part here was that it leveraged the superb morphing capabilities of StyleGAN2 and took an image of a sibling, a person who has somewhat similar proportions to the target subject, and morphed them into a modern image of this historic figure. This was brilliant because restoring images is hard, but with StyleGAN2 morphing is now easy, so the authors decided to trade a difficult problem for an easier one. And the results speak for themselves. Of course, we cannot know for sure if this is what these historic figures really looked like, but for now it makes one heck of a thought experiment. And now let's marvel at these beautiful results with the new method that goes by the name StyleGAN2-ADA. While we look through these results, all of which were generated with the new method, here are three things that it does better. One, it often works just as well as StyleGAN2, but requires 10 times fewer images for training. This means that it can create these beautiful images by training a set of neural networks from less than 10,000 images at a time. Whoa! That is not much at all. Two, it creates better quality results. The baseline here is the original StyleGAN2; the numbers are subject to minimization and are a measure of the quality of these images. As you see from the bolded numbers, the new method not only beats the baseline method substantially, but it does it across the board.
That is a rare sight indeed. And three, we can train this method faster, it generates these images faster, and in the meantime it also consumes less memory, which is usually in short supply on our graphics cards. Now, we noted that the new version of the method is called StyleGAN2-ADA. What is ADA? This part means adaptive discriminator augmentation. What does that mean exactly? It means that the new method endeavors to squeeze as much information out of these training datasets as it can. Data augmentation is not new, it has been done for many years now, and essentially it means that we rotate, colorize, or even corrupt these images during the training process. The key here is that with this, we are artificially increasing the number of training samples the neural network sees. The difference here is that they used a greater set of augmentations, and the adaptive part means that these augmentations are tailored more to the dataset at hand. And now comes the best part, hold on to your papers and let's look at the timeline here. StyleGAN2 appeared in December 2019, and StyleGAN2-ADA, this method, came out just half a year later. Such immense progress in just six months of time. The pace of progress in machine learning research is absolutely stunning these days. Imagine what we will be able to do with these techniques just a couple more years down the line. What a time to be alive. But this paper also teaches us a very important lesson that I would like to show you. Have a look at this table that shows the energy expenditure for this project for transparency, but it also tells us the number of experiments that were required to finish such an amazing paper. And that is more than 3,300 experiments, 255 of which were wasted due to technical problems. In the foreword of my PhD thesis, I wrote the following quote: research is the study of failure. More precisely, research is the study of obtaining new knowledge through failure. A bad researcher fails 100% of the time, while a good one only fails 99% of the time. Hence, what you see written here, and in most papers, is only 1% of the work that has been done. I would like to thank Felícia, my wife, for providing motivation, shielding me from distractions, and bringing sunshine to my life to endure through many of these failures. This paper is a great testament to how difficult the life of a researcher is. How many people give up their dreams after being rejected once, or maybe two times, or ten times? Most people give up after ten tries, and just imagine having a thousand failed experiments and still not even being close to publishing a paper yet. And with a little more effort, this amazing work came out of it. Failing doesn't mean losing, not in the slightest. Huge congratulations to the authors for their endurance and for this amazing work, and I think this also shows that these Weights & Biases experiment-tracking tools are invaluable, because it is next to impossible to remember what went wrong with each of them and what should be fixed. What you see here is a report on this exact paper we have talked about, which was made with Weights & Biases. I put a link to it in the description. Make sure to have a look, I think it helps you understand this paper better. If you work with learning algorithms on a regular basis, make sure to check out Weights & Biases.
Their system is designed to help you organize your experiments, and it is so good it could shave off weeks or even months of work from your projects, and it is completely free for all individuals, academics, and open source projects. This really is as good as it gets, and it is hardly a surprise that they are now used by over 200 companies and research institutions. Make sure to visit them through wandb.com/papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.72, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karajjona Ifahir."}, {"start": 4.72, "end": 10.72, "text": " Today, we are going to explore a paper that improves on the incredible StarGam 2."}, {"start": 11.44, "end": 18.080000000000002, "text": " What is that? StarGam 2 is a neural network-based learning algorithm that is not only capable"}, {"start": 18.080000000000002, "end": 23.6, "text": " of creating these eye-poppingly detailed images of human beings that don't even exist,"}, {"start": 23.6, "end": 28.72, "text": " but it also improved on its previous version in a number of different ways."}, {"start": 28.72, "end": 34.96, "text": " For instance, with the original StarGam method, we could exert some artistic control over these images."}, {"start": 35.6, "end": 43.12, "text": " However, look, you see how this part of the teeth and eyes are pinned to a particular location"}, {"start": 43.12, "end": 48.56, "text": " and the algorithm just refuses to let it go, sometimes to the detriment of its surroundings."}, {"start": 49.2, "end": 54.239999999999995, "text": " The improved StarGam 2 method addressed this problem, you can see the results here."}, {"start": 54.24, "end": 60.96, "text": " Teeth and eyes are now allowed to float around freely and perhaps this is the only place on the"}, {"start": 60.96, "end": 67.44, "text": " internet where we can say that and be happy about it. It could also make two images together"}, {"start": 67.44, "end": 74.64, "text": " and it could do it not only for human faces, but for cars, buildings, horses and more."}, {"start": 74.64, "end": 81.84, "text": " And got this, this paper was published in December 2019 and since then, it has been used in a number"}, {"start": 81.84, "end": 88.16, "text": " of absolutely incredible applications and follow-up works. Let's look at three of them."}, {"start": 88.96000000000001, "end": 94.64, "text": " One, the first question I usually hear when I talk about an amazing paper like this is,"}, {"start": 94.64, "end": 100.88, "text": " okay, great, but when do I get to use this? And the answer is, right now, because it is"}, {"start": 100.88, "end": 108.16, "text": " implemented in Photoshop in a feature that is called Neural Filters. Two, artistic control over"}, {"start": 108.16, "end": 115.36, "text": " these images has improved so much that now we can pin down a few intuitive parameters and change"}, {"start": 115.36, "end": 121.28, "text": " them with minimal changes to other parts of the image. For instance, it could grow Elon Musk"}, {"start": 121.28, "end": 128.07999999999998, "text": " a majestic beard and Elon Musk was not the only person who got an algorithmic beard. I hope you"}, {"start": 128.07999999999998, "end": 134.24, "text": " know what's coming. Yes, I got one too. 
Let me know in the comments whose beard you liked better."}, {"start": 134.24, "end": 142.08, "text": " Three, a nice follow-up paper that could take a photo of Abraham Lincoln and other historic figures"}, {"start": 142.08, "end": 148.64000000000001, "text": " and restore their images as if we were time travelers and took these photos with a more modern camera."}, {"start": 149.36, "end": 155.52, "text": " The best part here was that it leveraged the superb morphing capabilities of Stargain II and"}, {"start": 155.52, "end": 162.16000000000003, "text": " took an image of their siblings, a person who has somewhat similar proportions to the target subject"}, {"start": 162.16, "end": 169.76, "text": " and morphed them into a modern image of this historic figure. This was brilliant because restoring"}, {"start": 169.76, "end": 177.04, "text": " images is hard, but with Stargain II morphing is now easy so the authors decided to trade a"}, {"start": 177.04, "end": 184.16, "text": " difficult problem for an easier one. And the results speak for themselves. Of course, we cannot know"}, {"start": 184.16, "end": 190.4, "text": " for sure if this is what these historic figures really looked like, but for now it makes one heck"}, {"start": 190.4, "end": 196.48000000000002, "text": " of a thought experiment. And now let's marvel at these beautiful results with the new method"}, {"start": 196.48000000000002, "end": 202.88, "text": " that goes by the name Stargain II Aida. While we look through these results, all of which were"}, {"start": 202.88, "end": 210.16, "text": " generated with the new method, here are three things that it does better. One, it often works just"}, {"start": 210.16, "end": 218.08, "text": " as well as Stargain II, but requires 10 times fewer images for training. This means that now it can"}, {"start": 218.08, "end": 224.4, "text": " create these beautiful images and this can be done by training a set of neural networks from less"}, {"start": 224.4, "end": 233.60000000000002, "text": " than 10,000 images at a time. Whoa! That is not much at all. Two, it creates better quality results."}, {"start": 234.16000000000003, "end": 239.84, "text": " The baseline here is the original Stargain II, the numbers are subject to minimization"}, {"start": 239.84, "end": 245.28, "text": " and are a measure of the quality of these images. As you see from the bolded numbers,"}, {"start": 245.28, "end": 251.68, "text": " the new method not only beats the baseline method substantially, but it does it across the board."}, {"start": 252.4, "end": 260.24, "text": " That is a rare sight indeed. And three, we can train this method faster, it generates these images"}, {"start": 260.24, "end": 266.56, "text": " faster and in the meantime also consumes less memory, which is usually in short supply on our"}, {"start": 266.56, "end": 272.88, "text": " graphics cards. Now we noted that the new version of the method is called Stargain II Aida."}, {"start": 272.88, "end": 280.96, "text": " What is Aida? This part means adaptive discriminator augmentation. What does that mean exactly?"}, {"start": 280.96, "end": 286.96, "text": " This means that the new method endeavors to squeeze as much information out of these training"}, {"start": 286.96, "end": 294.32, "text": " datasets as it can. 
Data augmentation is not new, it has been done for many years now, and essentially"}, {"start": 294.32, "end": 300.48, "text": " this means that we rotate, colorize, or even corrupt these images during the training process."}, {"start": 300.48, "end": 306.8, "text": " The key here is that with this, we are artificially increasing the number of training samples"}, {"start": 306.8, "end": 312.32, "text": " the neural network sees. The difference here is that they used a greater set of augmentation,"}, {"start": 312.32, "end": 318.56, "text": " and the adaptive part means that these augmentation are tailored more to the dataset at hand."}, {"start": 319.36, "end": 324.56, "text": " And now comes the best part, hold on to your papers and let's look at the timeline here."}, {"start": 324.56, "end": 334.0, "text": " Stargain II appeared in December 2019, and Stargain II Aida, this method, came out just half a year later."}, {"start": 334.88, "end": 341.12, "text": " Such immense progress in just six months of time. The pace of progress in machine learning research"}, {"start": 341.12, "end": 347.04, "text": " is absolutely stunning these days. Imagine what we will be able to do with these techniques"}, {"start": 347.04, "end": 354.16, "text": " just a couple more years down the line. What a time to be alive. But this paper also teaches"}, {"start": 354.16, "end": 360.32000000000005, "text": " a very important lesson to us that I would like to show you. Have a look at this table that shows"}, {"start": 360.32000000000005, "end": 367.76000000000005, "text": " the energy expenditure for this project for transparency, but it also tells us the number of"}, {"start": 367.76000000000005, "end": 375.84000000000003, "text": " experiments that were required to finish such an amazing paper. And that is more than 3,300"}, {"start": 375.84, "end": 384.15999999999997, "text": " experiments. 255 of which were wasted due to technical problems. In the forward of my PhD thesis,"}, {"start": 384.15999999999997, "end": 391.28, "text": " I wrote the following quote, research is the study of failure. More precisely, research is the"}, {"start": 391.28, "end": 397.91999999999996, "text": " study of obtaining new knowledge through failure. A bad researcher fears 100% of the time,"}, {"start": 397.91999999999996, "end": 405.52, "text": " while a good one only fears 99% of the time. Hence what you see written here, and in most papers"}, {"start": 405.52, "end": 412.0, "text": " is only 1% of the work that has been done. I would like to thank Felizia, my wife, for providing"}, {"start": 412.0, "end": 418.24, "text": " motivation, shielding me from distractions, and bringing sunshine to my life to endure through"}, {"start": 418.24, "end": 424.64, "text": " many of these failures. This paper is a great testament to show how difficult the life of a researcher"}, {"start": 424.64, "end": 431.03999999999996, "text": " is. How many people give up their dreams after being rejected once, or maybe two times,"}, {"start": 431.04, "end": 438.48, "text": " 10 times? Most people give up after 10 tries, and just imagine having a thousand failed experiments"}, {"start": 438.48, "end": 444.8, "text": " and still not even being close to publishing a paper yet. And with a little more effort,"}, {"start": 444.8, "end": 450.96000000000004, "text": " this amazing work came out of it. 
Failing doesn't mean losing, not in the slightest."}, {"start": 451.6, "end": 457.28000000000003, "text": " Huge congratulations to the authors for their endurance and for this amazing work, and I think"}, {"start": 457.28, "end": 462.88, "text": " this also shows that these weights and biases experiment-tracking tools are invaluable,"}, {"start": 462.88, "end": 468.79999999999995, "text": " because it is next to impossible to remember what went wrong with each of them, and what should"}, {"start": 468.79999999999995, "end": 474.55999999999995, "text": " be fixed. What you see here is a report of this exact paper we have talked about, which was made"}, {"start": 474.55999999999995, "end": 479.91999999999996, "text": " by weights and biases. I put a link to it in the description. Make sure to have a look, I think"}, {"start": 479.91999999999996, "end": 485.84, "text": " it helps you understand this paper better. If you work with learning algorithms on a regular basis,"}, {"start": 485.84, "end": 491.03999999999996, "text": " make sure to check out weights and biases. Their system is designed to help you organize your"}, {"start": 491.03999999999996, "end": 497.03999999999996, "text": " experiments, and it is so good it could shave off weeks or even months of work from your projects,"}, {"start": 497.03999999999996, "end": 502.64, "text": " and is completely free for all individuals, academics, and open source projects."}, {"start": 502.64, "end": 508.71999999999997, "text": " This really is as good as it gets, and it is hardly a surprise that they are now used by over 200"}, {"start": 508.71999999999997, "end": 515.36, "text": " companies and research institutions. Make sure to visit them through wnb.com, slash papers,"}, {"start": 515.36, "end": 519.92, "text": " or just click the link in the video description, and you can get a free demo today."}, {"start": 519.92, "end": 525.12, "text": " Our thanks to weights and biases for their longstanding support, and for helping us make better"}, {"start": 525.12, "end": 546.08, "text": " videos for you. Thanks for watching, and for your generous support, and I'll see you next time."}]
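To make the adaptive discriminator augmentation idea described in the transcription above a bit more concrete, here is a minimal Python sketch. It is not the authors' StyleGAN2-ADA code: the three augmentations, the overfitting signal `d_sign`, and the constants `target` and `step` are simplified assumptions, and the real method applies differentiable augmentations inside the GAN training loop.

```python
import numpy as np

def augment_image(img, p, rng):
    """Apply each simple augmentation with probability p (the adaptive strength)."""
    if rng.random() < p:
        img = img[:, ::-1]                                     # horizontal flip
    if rng.random() < p:
        img = np.rot90(img, k=int(rng.integers(1, 4)))         # 90/180/270 degree rotation
    if rng.random() < p:
        img = np.clip(img + rng.uniform(-0.2, 0.2), 0.0, 1.0)  # brightness shift ("corruption")
    return img

def augment_batch(images, p, rng):
    """Augment every image in a batch of shape (batch, H, W, 3) with values in [0, 1]."""
    return np.stack([augment_image(img, p, rng) for img in images])

def update_p(p, d_sign, target=0.6, step=0.01):
    """Nudge the augmentation strength up when the discriminator looks overconfident
    on real images (d_sign above target), and down otherwise."""
    return float(np.clip(p + step * np.sign(d_sign - target), 0.0, 1.0))

rng = np.random.default_rng(0)
batch = rng.random((8, 64, 64, 3))   # a fake batch of square training images
p = 0.0                              # start with no augmentation
p = update_p(p, d_sign=0.8)          # pretend the discriminator is overfitting
augmented = augment_batch(batch, p, rng)
print(f"augmentation strength p = {p}, batch shape = {augmented.shape}")
```

The design choice mirrored here is the "adaptive" part: the augmentation probability is not fixed but is nudged up or down during training depending on how overconfident the discriminator appears on real data.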
Two Minute Papers
https://www.youtube.com/watch?v=dVa1xRaHTA0
NVIDIA’s AI Puts Video Calls On Steroids! 💪
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/ayush-thakur/face-vid2vid/reports/Overview-of-One-Shot-Free-View-Neural-Talking-Head-Synthesis-for-Video-Conferencing--Vmlldzo1MzU4ODc 📝 The paper "One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing" is available here: https://nvlabs.github.io/face-vid2vid/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Joshua Goller, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-820315/ Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. This paper is really something else. Scientists at Nvidia came up with an absolutely insane idea for video conferencing. Their idea is not to do what everyone else is doing, which is transmitting our video to the person on the other end. No, no, of course not. That would be too easy. What they do in this work is take only the first image from this video and throw away the entire video afterwards. But before that, the method stores a tiny bit of information from it, which is how our head is moving over time and how our expressions change. That is an absolutely outrageous idea, and of course we like those around here. So, does this work? Well, let's have a look. This is the input video; note that this is not transmitted, only the first image and some additional information are sent, and the rest of this video is discarded. And hold on to your papers, because this is the output of the algorithm compared to the input video. No, this is not some kind of misunderstanding. Nobody has copy-pasted the results there. This is a near-perfect reconstruction of the input, except that the amount of information we need to transmit through the network is significantly less than with previous compression techniques. How much less? Well, you know what's coming, so let's try it out. Here is the output of the new technique, and here is the comparison against H.264, a powerful and commonly used video compression standard. Well, to our disappointment, the two seem close. The new technique appears better, especially around the glasses, but the rest is similar. And if you have been holding on to your papers so far, now squeeze that paper, because this is not a reasonable comparison, and that is because the previous method was allowed to transmit 6 to 12 times more information. Look, as we further decrease the data allowance of the previous method, it can still transmit more than twice as much information, and at this point there is no contest. This bitrate would be unusable for any kind of video conferencing, while the new method uses less than half as much information and still transmits a sharp and perfectly fine video. Overall, the authors report that their new method is 10 times more efficient. That is unreal. This is an excellent video reconstruction technique, that much is clear. And if it only did that, it would be a great paper. But this is not a great paper. This is an absolutely amazing paper, so it does even more. Much, much more. For instance, it can also rotate our head and make a frontal video, it can fix potential framing issues by translating our head, and it can transfer all of our gestures to a new model. And it is also evaluated well, so all of these features are tested in isolation. So, look at these two previous methods trying to frontalize the input video. One would think that it's not even possible to perform properly, given how much these techniques are struggling with the task, until we look at the new method. My goodness. There is some jumpiness in the neck movement in the output video here, and some warping issues here, but otherwise these are very impressive results. Now, if you have been holding onto your papers so far, squeeze that paper, because these previous methods are not some ancient papers that were published a long time ago. Not at all. Both of them were published within the same year as the new paper. How amazing is that? Wow.
I really liked this page from the paper, which showcases both the images and the mathematical measurements against previous methods side by side. There are many ways to measure how close two videos are to each other. The up and down arrows tell us whether the given quality metric is subject to minimization or maximization. For instance, pixel-wise errors are typically minimized, so less is better, but we are to maximize the peak signal-to-noise ratio. And the cool thing is that none of this matters too much as soon as we insert the new technique, which really outpaces all of these. And we are still not done yet. So, we said that the technique takes the first image, reads the evolution of expressions and the head pose from the input video, and then it discards the entirety of the video, save for the first image. The cool thing about this was that we could pretend to rotate the head pose information, and the result is that the head appears rotated in the output image. That was great. But what if we take the source image from someone and take this data, the driving keypoint sequence, from someone else? Well, what we get is motion transfer. Look, we only need one image of the target person, and we can transfer all of our gestures to them in a way that is significantly better than most previous methods. Now, of course, not even this technique is perfect. It still struggles a great deal in the presence of occluder objects, but still, just the fact that this is possible feels like something straight out of a science fiction movie. What you see here is a report on this exact paper we have talked about, which was made with Weights & Biases. I put a link to it in the description; make sure to have a look, I think it helps you understand this paper better. During my PhD studies, I trained a ton of neural networks which were used in our experiments. However, over time, there was just too much data in our repositories, and what I am looking for is not data, but insight. And that's exactly how Weights & Biases helps you, by organizing your experiments. It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub, and more. And get this, Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com/papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karajol Neifahir."}, {"start": 4.8, "end": 7.5600000000000005, "text": " This paper is really something else."}, {"start": 7.5600000000000005, "end": 13.96, "text": " Scientists at Nvidia came up with an absolutely insane idea for video conferencing."}, {"start": 13.96, "end": 21.56, "text": " Their idea is not to do what everyone else is doing, which is transmitting our video to the person on the other end."}, {"start": 21.56, "end": 23.52, "text": " No, no, of course not."}, {"start": 23.52, "end": 25.6, "text": " That would be too easy."}, {"start": 25.6, "end": 34.24, "text": " What they do in this work is take only the first image from this video and they throw away the entire video afterwards."}, {"start": 34.24, "end": 44.92, "text": " But before that, it stores a tiny bit of information from it, which is how our head is moving over time and how our expressions change."}, {"start": 44.92, "end": 52.84, "text": " That is an absolutely outrageous idea, and of course we like those around here, so does this work."}, {"start": 52.84, "end": 54.84, "text": " Well, let's have a look."}, {"start": 54.84, "end": 65.4, "text": " This is the input video, note that this is not transmitted only the first image and some additional information and the rest of this video is discarded."}, {"start": 65.4, "end": 72.4, "text": " And hold on to your papers because this is the output of the algorithm compared to the input video."}, {"start": 72.4, "end": 75.56, "text": " No, this is not some kind of misunderstanding."}, {"start": 75.56, "end": 78.60000000000001, "text": " Nobody has copypasted the results there."}, {"start": 78.6, "end": 90.88, "text": " This is a near-perfect reconstruction of the input, except that the amount of information we need to transmit through the network is significantly less than with previous compression techniques."}, {"start": 90.88, "end": 92.24, "text": " How much less?"}, {"start": 92.24, "end": 95.63999999999999, "text": " Well, you know what's coming, so let's try it out."}, {"start": 95.63999999999999, "end": 106.16, "text": " Here is the output of the new technique and here is the comparison against H.264, a powerful and commonly used video compression standard."}, {"start": 106.16, "end": 110.0, "text": " Well, to our disappointment, the two seem close."}, {"start": 110.0, "end": 116.72, "text": " The new technique appears better, especially around the glasses, but the rest is similar."}, {"start": 116.72, "end": 125.03999999999999, "text": " And if you have been holding on to your paper so far, now squeeze that paper because this is not a reasonable comparison,"}, {"start": 125.03999999999999, "end": 132.32, "text": " and that is because the previous method was allowed to transmit 6 to 12 times more information."}, {"start": 132.32, "end": 140.23999999999998, "text": " Look, as we further decrease the data allowance of the previous method, it can still transmit more than twice as much information,"}, {"start": 140.23999999999998, "end": 143.2, "text": " and at this point there is no contest."}, {"start": 143.2, "end": 151.68, "text": " This bitrate would be unusable for any kind of video conferencing, while the new method uses less than half as much information,"}, {"start": 151.68, "end": 156.24, "text": " and still transmits a sharp and perfectly fine video."}, {"start": 156.24, "end": 161.68, "text": " Overall, the authors report that their new method is 10 times 
more efficient."}, {"start": 161.68, "end": 163.52, "text": " That is unreal."}, {"start": 163.52, "end": 168.4, "text": " This is an excellent video reconstruction technique that much is clear."}, {"start": 168.4, "end": 172.4, "text": " And if it only did that, it would be a great paper."}, {"start": 172.4, "end": 174.64000000000001, "text": " But this is not a great paper."}, {"start": 174.64000000000001, "end": 180.08, "text": " This is an absolutely amazing paper, so it does even more."}, {"start": 180.08, "end": 182.0, "text": " Much, much more."}, {"start": 182.0, "end": 186.56, "text": " For instance, it can also rotate our head and make a frontal video,"}, {"start": 186.56, "end": 194.88, "text": " can also fix potential framing issues by translating our head and transferring all of our gestures to a new model."}, {"start": 194.88, "end": 202.0, "text": " And it is also evaluated well, so all of these features are tested in isolation."}, {"start": 202.0, "end": 206.88, "text": " So, look at these two previous methods trying to frontalize the input video."}, {"start": 206.88, "end": 211.04, "text": " One would think that it's not even possible to perform properly,"}, {"start": 211.04, "end": 214.64000000000001, "text": " given how much these techniques are struggling with the task,"}, {"start": 214.64, "end": 217.83999999999997, "text": " until we look at the new method."}, {"start": 217.83999999999997, "end": 219.44, "text": " My goodness."}, {"start": 219.44, "end": 225.35999999999999, "text": " There is some jumpiness in the neck movement in the output video here,"}, {"start": 225.35999999999999, "end": 230.79999999999998, "text": " and some warping issues here, but otherwise very impressive results."}, {"start": 230.79999999999998, "end": 235.27999999999997, "text": " Now, if you have been holding onto your paper so far, squeeze that paper"}, {"start": 235.27999999999997, "end": 241.27999999999997, "text": " because these previous methods are not some ancient papers that were published a long time ago."}, {"start": 241.28, "end": 246.64000000000001, "text": " Not at all. Both of them were published within the same year as the new paper."}, {"start": 247.44, "end": 250.0, "text": " How amazing is that? Wow."}, {"start": 250.64, "end": 256.96, "text": " I really liked this page from the paper, which showcases both the images and the mathematical"}, {"start": 256.96, "end": 262.16, "text": " measurements against previous methods side by side. There are many ways to measure"}, {"start": 262.16, "end": 267.6, "text": " how close two videos are to each other. The up and down arrows tell us whether the given"}, {"start": 267.6, "end": 274.08000000000004, "text": " quality metric is subject to minimization or maximization. For instance, pixel-wise errors are"}, {"start": 274.08000000000004, "end": 281.52000000000004, "text": " typically minimized, so lesser is better, but we are to maximize the peak signal to noise ratio."}, {"start": 281.52000000000004, "end": 287.28000000000003, "text": " And the cool thing is that none of this matters too much as soon as we insert the new technique,"}, {"start": 287.28000000000003, "end": 292.64000000000004, "text": " which really outpaces all of these. 
And we are still not done yet."}, {"start": 292.64, "end": 298.64, "text": " So, we said that the technique takes the first image, reads the evolution of expressions"}, {"start": 298.64, "end": 304.88, "text": " and the head pose from the input video, and then it discards the entirety of the video,"}, {"start": 304.88, "end": 310.8, "text": " save for the first image. The cool thing about this was that we could pretend to rotate"}, {"start": 310.8, "end": 316.8, "text": " the head pose information, and the result is that the head appears rotated in the output image."}, {"start": 316.8, "end": 324.24, "text": " That was great. But what if we take the source image from someone and take this data,"}, {"start": 324.24, "end": 330.96000000000004, "text": " the driving key point sequence from someone else? Well, what we get is motion transfer."}, {"start": 333.28000000000003, "end": 340.0, "text": " Look, we only need one image of the target person, and we can transfer all of our gestures to them"}, {"start": 340.0, "end": 350.24, "text": " in a way that is significantly better than most previous methods."}, {"start": 350.56, "end": 356.4, "text": " Now, of course, not even this technique is perfect. It still struggles a great deal in the presence"}, {"start": 356.4, "end": 362.4, "text": " of occluder objects, but still, just the fact that this is possible feels like something"}, {"start": 362.4, "end": 368.24, "text": " straight out of a science fiction movie. What you see here is a report of this exact paper we"}, {"start": 368.24, "end": 373.44, "text": " have talked about, which was made by weights and biases. I put a link to it in the description,"}, {"start": 373.44, "end": 379.44, "text": " make sure to have a look, I think it helps you understand this paper better. During my PhD studies,"}, {"start": 379.44, "end": 384.96000000000004, "text": " I trained a ton of neural networks which were used in our experiments. However, over time,"}, {"start": 384.96000000000004, "end": 391.36, "text": " there was just too much data in our repositories, and what I am looking for is not data, but inside."}, {"start": 391.84000000000003, "end": 396.72, "text": " And that's exactly how weights and biases helps you by organizing your experiments."}, {"start": 396.72, "end": 402.08000000000004, "text": " It is used by more than 200 companies and research institutions, including OpenAI,"}, {"start": 402.08000000000004, "end": 409.12, "text": " Toyota Research, GitHub, and more. And get this, weight and biases is free for all individuals,"}, {"start": 409.12, "end": 416.0, "text": " academics, and open source projects. Make sure to visit them through wnba.com slash papers,"}, {"start": 416.0, "end": 420.96000000000004, "text": " or just click the link in the video description, and you can get a free demo today."}, {"start": 420.96, "end": 426.96, "text": " Our thanks to weights and biases for their longstanding support, and for helping us make better videos for you."}, {"start": 426.96, "end": 456.79999999999995, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
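As a rough illustration of why sending one reference frame plus a small keypoint stream saves so much bandwidth, here is a toy Python sketch. Everything in it is a stand-in rather than NVIDIA's method: the keypoint detector and the reconstruction step are placeholders, and the frame resolution, frame rate, and keypoint count are assumed values, not the ones used in the paper.

```python
import numpy as np

FRAME_SHAPE = (512, 512, 3)   # assumed video resolution
NUM_KEYPOINTS = 20            # assumed number of learned keypoints per frame
FPS, SECONDS = 30, 10         # assumed call length

def detect_keypoints(frame):
    """Stand-in for a learned keypoint detector: returns (x, y) pairs as float32."""
    return np.random.rand(NUM_KEYPOINTS, 2).astype(np.float32)

def reconstruct(reference_frame, keypoints):
    """Stand-in for the neural generator that warps and animates the reference frame."""
    return reference_frame  # a real model would re-synthesize the face here

frames = [np.zeros(FRAME_SHAPE, dtype=np.uint8) for _ in range(FPS * SECONDS)]
reference = frames[0]

# "Transmit" the reference frame once, then only a tiny keypoint packet per frame.
sent_bytes = reference.nbytes
for frame in frames[1:]:
    kp = detect_keypoints(frame)
    sent_bytes += kp.nbytes
    _ = reconstruct(reference, kp)   # receiver-side reconstruction

raw_bytes = sum(f.nbytes for f in frames)
print(f"raw video: {raw_bytes / 1e6:.1f} MB, keypoint stream: {sent_bytes / 1e6:.1f} MB")
```

Even this crude accounting shows the keypoint stream is orders of magnitude smaller than the raw frames; the hard part, which the paper actually solves, is making the receiver-side reconstruction look like the real video.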
Two Minute Papers
https://www.youtube.com/watch?v=4etSuEQOzDw
All Hail The Adaptive Staggered Grid! 🌐🤯
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "An adaptive staggered-tilted grid for incompressible flow simulation" is available here: https://cs.nyu.edu/~sw4429/files/sa20-fluid.pdf https://dl.acm.org/doi/abs/10.1145/3414685.3417837 ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Joshua Goller, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to conquer some absolutely insane fluid and smoke simulations. A common property of these simulation programs is that they subdivide the simulation domain into a grid and compute important quantities like velocity and pressure at these grid points. Normally, a regular grid looks something like this, but this crazy new technique throws away the idea of using this as a grid and uses this instead. This is called an adaptive staggered tilted grid, or AST grid for short. So, what does that really mean? The tilted part means that cells can be rotated by 45 degrees like this, and interestingly, they typically appear only where needed; I'll show you in a moment. The adaptive part means that the size of the grid cells is not fixed and can be all over the place. And even better, this concept can be easily generalized to 3D grids as well. Now, when I first read this paper, two things came to my mind: one, that is an insane idea, I kinda like it, and two, it cannot possibly work. It turns out only one of these is true. And I was also wondering, why? Why do all this? And the answer is, because this way we get better fluid and smoke simulations. Oh yeah, let's demonstrate it through four beautiful experiments. Experiment number one, Kármán vortex streets. We noted that the tilted grid points only appear where they are needed, and these are places where there is a great deal of vorticity. Let's test that. This phenomenon showcases repeated vortex patterns, and the algorithm is hard at work here. How do we know? Well, of course, we don't know that yet. So, let's look under the hood together and see what is going on. Oh wow, look at that. The algorithm knows where the vorticity is, and as a result, these tilted cells are flowing through the simulation beautifully. Experiment number two, smoke plumes and porous nets. This technique refines the grid with these tilted grid cells in areas where there is a great deal of turbulence. And wait a second, what is this? The net is also covered with tilted cells. Why is that? The reason for this is that the tilted cells not only cover turbulent regions, but other regions of interest as well. In this case, it enables us to capture the narrow flow around the obstacle. Without the AST grid, some of these smoke plumes wouldn't make it through the net. Experiment number three, the boat ride. Note that the surface of the pool is completely covered with the new tilted cells, making sure that the wake of the boat is as detailed as it can possibly be. But, in the meantime, the algorithm is not wasteful. Look, the volume itself is free of them. And now, hold onto your papers for experiment number four, thin water sheets. You can see the final simulation here, and if we look under the hood, my goodness, just look at how much work this algorithm is doing. And what is even better, it only does so where it is really needed; it doesn't do any extra work in these regions. I am so far very impressed with this technique. We saw that it does a ton of work for us, increases the detail in our simulations, and helps things flow through where they should really flow through. Now, with that said, there is only one question left. What does this cost us? How much more expensive is this AST grid simulation than a regular grid? Plus 100% computation time? Plus 50%? How much is it worth to you? Please stop the video and leave a comment with your guess. I'll wait. Thank you. The answer is none of those.
It costs almost nothing and typically adds an additional 1% of computation time. And in return for that almost nothing, we get all of these beautiful fluid and smoke simulations. What a time to be alive. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only cloud service with 48 GB RTX 8000s. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com/papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Karo Zsolnai-Fahir."}, {"start": 4.64, "end": 10.8, "text": " Today, we are going to conquer some absolutely insane fluid and smoke simulations."}, {"start": 10.8, "end": 17.44, "text": " A common property of these simulation programs is that they subdivide the simulation domain into a grid"}, {"start": 17.44, "end": 23.6, "text": " and they compute important quantities like velocity and pressure in these grid points."}, {"start": 23.6, "end": 35.44, "text": " Normally, regular grid looks something like this, but this crazy new technique throws away the idea of using this as a grid and uses this instead."}, {"start": 35.44, "end": 41.120000000000005, "text": " This is called an adaptive staggered tilted grid, an AST grid in short."}, {"start": 41.120000000000005, "end": 43.84, "text": " So, what does that really mean?"}, {"start": 43.84, "end": 55.040000000000006, "text": " The tilted part means that cells can be rotated by 45 degrees like this and interestingly, they typically appear only where needed, I'll show you in a moment."}, {"start": 55.040000000000006, "end": 61.84, "text": " The adaptive part means that the size of the grid cells is not fixed and can be all over the place."}, {"start": 61.84, "end": 67.68, "text": " And even better, this concept can be easily generalized to 3D grids as well."}, {"start": 67.68, "end": 79.76, "text": " Now, when I first read this paper, two things came to my mind, one that is an insane idea, I kinda like it, and two, it cannot possibly work."}, {"start": 79.76, "end": 83.12, "text": " It turns out only one of these is true."}, {"start": 83.12, "end": 85.92000000000002, "text": " And I was also wondering why?"}, {"start": 85.92000000000002, "end": 88.0, "text": " Why do all this?"}, {"start": 88.0, "end": 93.12, "text": " And the answer is because this way we get better fluid and smoke simulations."}, {"start": 93.12, "end": 98.08, "text": " Oh yeah, let's demonstrate it through 4 beautiful experiments."}, {"start": 98.08, "end": 102.08000000000001, "text": " Experiment number 1, Carmen Vortex Streets."}, {"start": 102.08000000000001, "end": 110.48, "text": " We noted that the tilted grid points only appear where they are needed and these are places where there is a great deal of vorticity."}, {"start": 110.48, "end": 111.84, "text": " Let's test that."}, {"start": 111.84, "end": 118.16, "text": " This phenomenon showcases repeated vortex patterns and the algorithm is hard at work here."}, {"start": 118.16, "end": 119.60000000000001, "text": " How do we know?"}, {"start": 119.6, "end": 123.91999999999999, "text": " Well, of course, we don't know that yet."}, {"start": 123.91999999999999, "end": 128.88, "text": " So, let's look under the hood together and see what is going on."}, {"start": 128.88, "end": 131.51999999999998, "text": " Oh wow, look at that."}, {"start": 131.51999999999998, "end": 140.4, "text": " The algorithm knows where the vorticity is and as a result, these tilted cells are flowing through the simulation beautifully."}, {"start": 143.35999999999999, "end": 148.4, "text": " Experiment number 2, Smoke Plumes and Porousnets."}, {"start": 148.4, "end": 155.28, "text": " This technique refines the grids with these tilted grid cells in areas where there is a great deal of turbulence."}, {"start": 155.28, "end": 159.36, "text": " And wait a second, what is this?"}, {"start": 159.36, "end": 163.12, "text": " The net is also covered with tilted 
cells."}, {"start": 163.12, "end": 164.64000000000001, "text": " Why is that?"}, {"start": 164.64000000000001, "end": 173.20000000000002, "text": " The reason for this is that the tilted cells not only cover turbulent regions, but other regions of interest as well."}, {"start": 173.2, "end": 178.72, "text": " In this case, it enables us to capture this narrow flow around the obstacle."}, {"start": 178.72, "end": 183.11999999999998, "text": " Without this no AST grid, some of these smoke plumes wouldn't make it through the net."}, {"start": 184.0, "end": 186.88, "text": " Experiment number 3, the boat ride."}, {"start": 187.6, "end": 192.56, "text": " Note that the surface of the pool is completely covered with the new tilted cells,"}, {"start": 192.56, "end": 197.2, "text": " making sure that the wake of the boat is as detailed as it can possibly be."}, {"start": 198.16, "end": 201.83999999999997, "text": " But, in the meantime, the algorithm is not wasteful."}, {"start": 201.84, "end": 204.72, "text": " Look, the volume itself is free of them."}, {"start": 206.48, "end": 212.56, "text": " And now, hold onto your papers for experiment number 4, thin water sheets."}, {"start": 212.56, "end": 214.64000000000001, "text": " You can see the final simulation here,"}, {"start": 216.48000000000002, "end": 218.4, "text": " and if we look under the hood,"}, {"start": 221.12, "end": 225.52, "text": " my goodness, just look at how much work this algorithm is doing."}, {"start": 225.52, "end": 233.12, "text": " And what is even better, it only does so, where it is really needed, it doesn't do any extra work in these regions."}, {"start": 234.0, "end": 237.04000000000002, "text": " I am so far very impressed with this technique."}, {"start": 237.52, "end": 242.88, "text": " We saw that it does a ton of work for us, and increases the detail in our simulations,"}, {"start": 242.88, "end": 246.88, "text": " and helps things flow through, where they should really flow through."}, {"start": 247.76000000000002, "end": 251.36, "text": " Now, with that said, there is only one question left."}, {"start": 251.92000000000002, "end": 253.04000000000002, "text": " What does this cost us?"}, {"start": 253.04, "end": 258.48, "text": " How much more expensive is this ASD grid simulation than a regular grid?"}, {"start": 258.48, "end": 261.52, "text": " Plus 100% computation time?"}, {"start": 261.52, "end": 263.44, "text": " Plus 50%?"}, {"start": 263.44, "end": 265.36, "text": " How much is it worth to you?"}, {"start": 265.36, "end": 268.88, "text": " Please stop the video and leave a comment with your guess."}, {"start": 268.88, "end": 269.92, "text": " I'll wait."}, {"start": 269.92, "end": 271.44, "text": " Thank you."}, {"start": 271.44, "end": 273.59999999999997, "text": " The answer is none of those."}, {"start": 273.59999999999997, "end": 280.15999999999997, "text": " It costs almost nothing and adds typically an additional 1% of computation time."}, {"start": 280.16, "end": 286.72, "text": " And in return for that almost nothing, we get all of these beautiful fluid and smoke simulations."}, {"start": 286.72, "end": 288.8, "text": " What a time to be alive."}, {"start": 288.8, "end": 292.8, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 292.8, "end": 298.8, "text": " If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 298.8, "end": 305.28000000000003, "text": " They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances."}, 
{"start": 305.28, "end": 313.2, "text": " And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Asia."}, {"start": 313.2, "end": 318.55999999999995, "text": " Plus, they are the only Cloud service with 48GB RTX 8000."}, {"start": 318.55999999999995, "end": 323.11999999999995, "text": " Join researchers at organizations like Apple, MIT, and Caltech"}, {"start": 323.11999999999995, "end": 326.88, "text": " in using Lambda Cloud instances, workstations, or servers."}, {"start": 326.88, "end": 333.67999999999995, "text": " Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today."}, {"start": 333.68, "end": 339.28000000000003, "text": " Our thanks to Lambda for their long-standing support and for helping us make better videos for you."}, {"start": 339.28, "end": 369.11999999999995, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=B8RMUSmIGCI
3 New Things An AI Can Do With Your Photos!
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/mathisfederico/wandb_features/reports/Visualizing-Confusion-Matrices-With-W-B--VmlldzoxMzE5ODk 📝 The paper "GANSpace: Discovering Interpretable GAN Controls" is available here: https://github.com/harskish/ganspace 📝 Our material synthesis paper is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/ 📝 The font manifold paper is available here: http://vecg.cs.ucl.ac.uk/Projects/projects_fonts/projects_fonts.html 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Joshua Goller, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Thumbnail background image credit: https://pixabay.com/images/id-5330343/ Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Here you see people that don't exist. How can that be? Well, they don't exist because these images were created with a neural network-based learning method by the name of StyleGAN2, which can not only create eye-poppingly detailed-looking images, but it can also fuse these people together, or generate cars, churches, and of course, cats. An even cooler thing is that many of these techniques allow us to exert artistic control over these images. So, how does that happen? How do we control a neural network? It happens through exploring latent spaces. And what is that? A latent space is a made-up place where we are trying to organize data in a way that similar things are close to each other. What you see here is a 2D latent space for generating different fonts. It is hard to explain why these fonts are similar, but most of us would agree that they indeed share some common properties. The cool thing here is that we can explore this latent space with our cursor and generate all kinds of new fonts. You can try this work in your browser; the link is available in the video description. And luckily, we can build a latent space not only for fonts, but for nearly anything. I am a light transport researcher by trade, so in this earlier paper we were interested in generating hundreds of variants of a material model to populate this scene. In this latent space, we can concoct all of these really cool digital material models. A link to this work is also available in the video description. So, let's recap. One of the cool things we can do with latent spaces is generate new images that are somewhat similar. But, there is a problem. As we go in nearly any direction, not just one thing, but many things about the image change. For instance, as we explore the space of fonts here, not just the width of the font changes, everything changes. Or, if we explore materials here, not just the shininess or the colors of the material change, everything changes. This is great to explore if we can do it in real time. If I change this parameter, not just the car shape changes, the foreground changes, the background changes, again, everything changes. So, these are nice and intuitive controls, but not interpretable controls. Can we get that somehow? The answer is yes, not everything must change. This previous technique is based on StyleGAN2 and is called StyleFlow, and it can take an input photo of a test subject and edit a number of meaningful parameters. Age, expression, lighting, pose, you name it. For instance, it could also grow Elon Musk a majestic beard. And that's not all, because Elon Musk is not the only person who got a beard. Look, this is me here, after I got locked up for dropping my papers. And I spent so long in here that I grew a beard. Or, I mean, this neural network gave me one. And since the punishment for dropping your papers is not short, in fact it is quite long, this happened. Ouch! I hereby promise to never drop my papers ever again. You will also have to hold on to yours too, so stay alert. So, apparently interpretable controls already exist. And I wonder, how far can we push this concept? Beard or no beard is great, but what about cars? What about paintings? Well, this new technique found a way to navigate these latent spaces and introduces three amazing new examples of interpretable controls that I haven't seen anywhere else yet. One, it can change the car geometry.
We can change the sportiness of the car and even ask the design to be more or less boxy. Note that there is some additional damage here, but we can counteract that by changing the foreground to our taste, for instance, add some grass in there. Two, it can repaint paintings. We can change the roughness of the brush strokes, simplify the style, or even rotate the model. This way we can create or adjust without having to even touch a paintbrush. Three, facial expressions. First, when I started reading this paper, I was a little suspicious. I have seen these controls before, so I looked at it like this. But, as I saw how well it did, I went more like this. And this paper can do way more. It can add lipstick, change the shape of the mouth or the eyes, and do all this with very little collateral damage to the remainder of the image. Loving it. It can also find and blur the background similarly to those amazing portrait mode photos that newer smartphones can do. And of course, it can also do the usual suspects: adjusting the age, hairstyle, or growing a beard. So with that, there we go. Now, with the power of neural network-based learning methods, we can create new car designs, repaint paintings without ever touching a paintbrush, and give someone a shave. It truly feels like we are living in a science fiction world. What a time to be alive! This episode has been supported by Weights & Biases. In this post, they show you how to use their tool to visualize confusion matrices and find out where your neural network made mistakes and what exactly those mistakes were. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights & Biases is free for all individuals, academics, and open source projects. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get the free demo today. Our thanks to Weights & Biases for their long-standing support, and for helping us make better videos for you. Thanks for watching, and for your generous support, and I'll see you next time.
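The interpretable controls described in this transcript come from the GANSpace idea: run principal component analysis on a large batch of latent codes, then slide a single code along one principal direction to edit the image. The sketch below is only an illustration of that recipe under assumed names; `generator` is a hypothetical stand-in for a pretrained model such as StyleGAN2, and the latent dimension and sample counts are placeholders, not values from the paper.

```python
# Hedged sketch of the GANSpace recipe: PCA over sampled latent codes yields
# edit directions; moving a code along one component changes the image.
# `generator` is a hypothetical stand-in for a real pretrained GAN.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
latent_dim = 512
fake_weights = rng.standard_normal((latent_dim, 64 * 64 * 3))  # stub "network"

def generator(w: np.ndarray) -> np.ndarray:
    """Stand-in for StyleGAN2's synthesis network; returns a fake image."""
    return np.tanh(w @ fake_weights).reshape(64, 64, 3)

# 1) Sample many latent codes (GANSpace works in StyleGAN's intermediate W space).
W = rng.standard_normal((10_000, latent_dim))

# 2) PCA finds the directions along which the codes vary the most;
#    in GANSpace many of these turn out to be interpretable controls.
pca = PCA(n_components=20).fit(W)
directions = pca.components_               # shape: (20, latent_dim)
stdevs = np.sqrt(pca.explained_variance_)  # natural step size per direction

# 3) Edit one image by walking along a single principal component.
w0 = rng.standard_normal(latent_dim)
for sigma in (-3.0, 0.0, 3.0):             # edit strength in standard deviations
    img = generator(w0 + sigma * stdevs[0] * directions[0])
    print(f"sigma = {sigma:+.1f}  mean pixel value = {img.mean():+.4f}")
```

With a real generator, the three calls above would produce the "before, original, after" strip familiar from these papers, one image per edit strength along the chosen component.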
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 5.0, "end": 10.0, "text": " Here you see people that don't exist. How can that be?"}, {"start": 10.0, "end": 18.0, "text": " Well, they don't exist because these images were created with a neural network-based learning method by the name StarGam2,"}, {"start": 18.0, "end": 26.0, "text": " which can not only create eye-poppingly detailed looking images, but it can also fuse these people together,"}, {"start": 26.0, "end": 32.0, "text": " or generate cars, churches, and of course, cats."}, {"start": 32.0, "end": 39.0, "text": " An even cooler thing is that many of these techniques allow us to exert artistic control over these images."}, {"start": 39.0, "end": 44.0, "text": " So, how does that happen? How do we control a neural network?"}, {"start": 44.0, "end": 50.0, "text": " It happens through exploring latent spaces. And what is that?"}, {"start": 50.0, "end": 59.0, "text": " A latent space is a made-up place where we are trying to organize data in a way that similar things are close to each other."}, {"start": 59.0, "end": 64.0, "text": " What you see here is a 2D latent space for generating different fonts."}, {"start": 64.0, "end": 73.0, "text": " It is hard to explain why these fonts are similar, but most of us would agree that they indeed share some common properties."}, {"start": 73.0, "end": 80.0, "text": " The cool thing here is that we can explore this latent space with our cursor and generate all kinds of new fonts."}, {"start": 80.0, "end": 85.0, "text": " You can try this work in your browser, the link is available in the video description."}, {"start": 85.0, "end": 92.0, "text": " And luckily, we can build a latent space not only for fonts, but for nearly anything."}, {"start": 92.0, "end": 102.0, "text": " I am a light transport researcher by trade, so in this earlier paper we were interested in generating hundreds of variants of a material model to populate this scene."}, {"start": 102.0, "end": 108.0, "text": " In this latent space, we can conquer all of these really cool digital material models."}, {"start": 108.0, "end": 112.0, "text": " A link to this work is also available in the video description."}, {"start": 112.0, "end": 114.0, "text": " So, let's recap."}, {"start": 114.0, "end": 121.0, "text": " One of the cool things we can do with latent spaces is generate new images that are somewhat similar."}, {"start": 121.0, "end": 124.0, "text": " But, there is a problem."}, {"start": 124.0, "end": 132.0, "text": " As we go into nearly any direction, not just one thing, but many things about the image change."}, {"start": 132.0, "end": 140.0, "text": " For instance, as we explore the space of fonts here, not just the width of the font changes, everything changes."}, {"start": 140.0, "end": 148.0, "text": " Or, if we explore materials here, not just the shininess or the colors of the material change, everything changes."}, {"start": 148.0, "end": 152.0, "text": " This is great to explore if we can do it in real time."}, {"start": 152.0, "end": 161.0, "text": " If I change this parameter, not just the car shape changes, the foreground changes, the background changes, again, everything changes."}, {"start": 161.0, "end": 167.0, "text": " So, these are nice and intuitive controls, but not interpretable controls."}, {"start": 167.0, "end": 169.0, "text": " Can we get that somehow?"}, {"start": 169.0, "end": 173.0, "text": " The answer is yes, not 
everything must change."}, {"start": 173.0, "end": 184.0, "text": " This previous technique is based on Stagen tool and is called Stylflow, and it can take an input photo of a test subject and edit a number of meaningful parameters."}, {"start": 184.0, "end": 190.0, "text": " Age, expression, lighting, pose, you name it."}, {"start": 190.0, "end": 195.0, "text": " For instance, it could also grow Elon Musk a majestic beard."}, {"start": 195.0, "end": 205.0, "text": " And that's not all, because Elon Musk is not the only person who got a beard. Look, this is me here, after I got locked up for dropping my papers."}, {"start": 205.0, "end": 213.0, "text": " And I spent so long in here that I grew a beard. Or I mean this neural network gave me one."}, {"start": 213.0, "end": 221.0, "text": " And since the punishment for dropping your papers is not short, in fact it is quite long, this happened."}, {"start": 221.0, "end": 230.0, "text": " Ouch! I hereby promise to never drop my papers ever again. You will also have to hold on to yours too, so stay alert."}, {"start": 230.0, "end": 237.0, "text": " So, apparently interpretable controls already exist. And I wonder how far can we push this concept?"}, {"start": 237.0, "end": 243.0, "text": " Beard or no beard is great, but what about cars? What about paintings?"}, {"start": 243.0, "end": 255.0, "text": " Well, this new technique found a way to navigate these latent spaces and introduces three amazing new examples of interpretable controls that I haven't seen anywhere else yet."}, {"start": 255.0, "end": 268.0, "text": " One, it can change the car geometry. We can change the sportiness of the car and even ask the design to be more or less boxy."}, {"start": 268.0, "end": 280.0, "text": " Note that there is some additional damage here, but we can counteract that by changing the foreground to our taste, for instance, add some grass in there."}, {"start": 280.0, "end": 297.0, "text": " Two, it can repaint paintings. We can change the roughness of the grass strokes, simplify the style, or even rotate the model."}, {"start": 297.0, "end": 304.0, "text": " This way we can create or adjust without having to even touch a paintbrush."}, {"start": 304.0, "end": 316.0, "text": " Three, facial expressions. First, when I started reading this paper, I was a little suspicious. I have seen these controls before, so I looked at it like this."}, {"start": 316.0, "end": 322.0, "text": " But, as I saw how well it did, I went more like this."}, {"start": 322.0, "end": 335.0, "text": " And this paper can do way more. It can add lipstick, change the shape of the mouth or the eyes, and do all this with very little collateral damage to the remainder of the image."}, {"start": 335.0, "end": 345.0, "text": " Loving it. It can also find and blur the background similarly to those amazing portrait mode photos that newer smartphones can do."}, {"start": 345.0, "end": 354.0, "text": " And of course, it can also do the usual suspects, adjusting the age, hairstyle, or growing a beard."}, {"start": 354.0, "end": 369.0, "text": " So with that, there we go. Now, with the power of mural network-based learning methods, we can create new car designs, can repaint paintings without ever touching a paintbrush, and give someone a shave."}, {"start": 369.0, "end": 375.0, "text": " It truly feels like we are living in a science fiction world. What a time to be alive!"}, {"start": 375.0, "end": 389.0, "text": " This episode has been supported by weights and biases. 
In this post, they show you how to use their tool to visualize confusion matrices and find out where your mural network made mistakes and what exactly those mistakes were."}, {"start": 389.0, "end": 405.0, "text": " Weight and biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more."}, {"start": 405.0, "end": 414.0, "text": " And the best part is that weights and biases is free for all individuals, academics, and open source projects. It really is as good as it gets."}, {"start": 414.0, "end": 423.0, "text": " Make sure to visit them through wnb.com slash papers, or just click the link in the video description, and you can get the free demo today."}, {"start": 423.0, "end": 429.0, "text": " Our thanks to weights and biases for their long-standing support, and for helping us make better videos for you."}, {"start": 429.0, "end": 453.0, "text": " Thanks for watching, and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=lxzGraohijU
5 Crazy Simulations That Were Previously Impossible! ⛓
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/ayush-thakur/interpretability/reports/Interpretability-in-Deep-Learning-With-W-B-CAM-and-GradCAM--Vmlldzo5MTIyNw 📝 The paper "Incremental Potential Contact: Intersection- and Inversion-free Large Deformation Dynamics" is available here: https://ipc-sim.github.io/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Joshua Goller, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. If we study the laws of physics and program them into a computer, we can create a beautiful simulation that showcases the process of baking. And if we so desire, when we are done, we can even tear a loaf of bread apart. With this previous method, we can also smash Oreos, candy crabs, pumpkins, and much, much more. This jelly fracture scene is my long-time favorite. And this new work asks a brazen question only a proper computer graphics researcher could ask: can we write an even more extreme simulation? I don't think so, but apparently this paper promises a technique that supports more extreme compression and deformation, and when they say that, they really mean it. And let's see what this can do through five super fun experiments. Experiment number one: squishing. As you see, this paper aligns well with the favorite pastimes of a computer graphics researcher, which is, of course, destroying virtual objects in a spectacular fashion. First, we force these soft elastic virtual objects through a thin obstacle tube. Things get quite squishy here, and when they come out on the other side, their geometries can also separate properly. And watch how beautifully they regain their original shapes afterwards. Experiment number two: the tendril test. We grab a squishy ball and throw it at the wall, and here comes the cool part. This panel was made of glass, so we also get to view the whole interaction through it, and this way we can see all the squishing happening. Look, the tendrils are super detailed, and every single one remains intact and intersection-free, despite the intense compression. Outstanding. Experiment number three: the twisting test. We take a piece of mat and keep twisting and twisting, and still going. Note that the algorithm has to compute up to half a million contact events every time it advances the time a tiny bit, and still, no self-intersections, no anomalies. This is crazy. Some of our more seasoned Fellow Scholars will immediately ask, okay, great, but how real is all this? Is this just good enough to fool the untrained eye, or does it really simulate what would happen in reality? Well, hold on to your papers, because here comes my favorite part in these simulation papers, and this is when we let reality be our judge, and try to reproduce real-world footage with a simulation. Experiment number four: the high-speed impact test. Here is the real footage of a foam practice ball fired at a plate. And now, at the point of impact, this part of the ball has stopped, but the other side is still flying with a high velocity. So, what will be the result? A ton of compression. So, what does this simulator say about this? My goodness, just look at that. This is really accurate, loving it. This sounds all great, but do we really need this technique? The answer shall be given by experiment number five: ghosts and chains. What could that mean? Here, you see Houdini's Vellum, the industry standard simulator for cloth, soft bodies, and a number of other kinds of simulations. It is an absolutely amazing tool, but wait a second. Look, artificial ghost forces appear even on a simple test case with 35 chain links. And I wonder if the new method can deal with these 35 chain links? The answer is a resounding yes, no ghost forces. And not only that, but it can deal with even longer chains. Let's try a hundred links. Oh yeah, now we're talking. And now, only one question remains. How long do we have to wait for all this?
All this new technique asks for is a few seconds per frame for the simpler scenes, and on the order of minutes per frame for the more crazy tests out there. Praise the papers. That is a fantastic deal. And what is even more fantastic, all this is performed on your processor. So, of course, if someone can implement it in a way that it runs on the graphics card, the next paper down the line will be much, much faster. This episode has been supported by Weights & Biases. In this post, they show you how to use their tool to interpret the results of your neural networks. For instance, they show you how to check whether your neural network has actually looked at the dog in an image before classifying it as a dog. If you work with learning algorithms on a regular basis, make sure to check out Weights & Biases. Their system is designed to help you organize your experiments, and it is so good it could shave off weeks or even months of work from your projects, and is completely free for all individuals, academics and open source projects. This really is as good as it gets, and it is hardly a surprise that they are now used by over 200 companies and research institutions. Make sure to visit them through wandb.com slash papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
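The huge contact counts and the "no penetrations, no ghost forces" behavior described in this transcript come from the Incremental Potential Contact idea, where each implicit time step is posed as the minimization of an energy that includes a log-barrier on the distance between surfaces. The snippet below is a deliberately tiny 1D sketch of that idea under assumed parameter values, with a single particle and a floor; it is not the paper's algorithm, which adds elasticity, friction, Newton solvers, and continuous collision detection.

```python
# Toy 1D illustration (assumed simplification, not the paper's method):
# one implicit-Euler step = minimize an "incremental potential" combining an
# inertia term and a log-barrier, so the particle can approach the floor at
# y = 0 but never penetrate it.
import numpy as np
from scipy.optimize import minimize_scalar

m, h, g = 1.0, 1e-2, 9.81          # mass (kg), time step (s), gravity (m/s^2)
d_hat, kappa = 1e-3, 1e4           # barrier activation distance and stiffness

def barrier(d: float) -> float:
    """IPC-style log barrier: zero beyond d_hat, grows to infinity as d -> 0."""
    return 0.0 if d >= d_hat else -kappa * (d - d_hat) ** 2 * np.log(d / d_hat)

def incremental_potential(y: float, y_prev: float, v_prev: float) -> float:
    y_hat = y_prev + h * v_prev - h * h * g           # unconstrained prediction
    return 0.5 * m / (h * h) * (y - y_hat) ** 2 + barrier(y)

y, v = 0.5, 0.0                                       # drop from half a metre
for _ in range(300):                                  # three simulated seconds
    result = minimize_scalar(incremental_potential, args=(y, v),
                             bounds=(1e-9, 1.0), method="bounded")
    y_new = result.x
    v = (y_new - y) / h                               # implicit-Euler velocity update
    y = y_new

print(f"resting height ~ {y:.6f} m (strictly above the floor, by construction)")
```

Because the barrier blows up as the gap closes, the minimizer can never place the particle inside the floor, which is the toy analogue of the intersection-free guarantee the transcript praises.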
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karajona Ifaher."}, {"start": 4.64, "end": 8.8, "text": " If we study the laws of physics and program them into a computer,"}, {"start": 8.8, "end": 13.92, "text": " we can create a beautiful simulation that showcases the process of baking."}, {"start": 14.4, "end": 20.56, "text": " And if we so desire, when we are done, we can even tear a loaf of bread apart."}, {"start": 20.96, "end": 28.240000000000002, "text": " With this previous method, we can also smash Oreos, candy crabs, pumpkins, and much, much more."}, {"start": 28.24, "end": 31.279999999999998, "text": " This jelly fracture scene is my long time favorite."}, {"start": 32.08, "end": 38.879999999999995, "text": " And this new work asks a brazen question only a proper computer graphics researcher could ask,"}, {"start": 38.879999999999995, "end": 41.839999999999996, "text": " can we write an even more extreme simulation?"}, {"start": 42.56, "end": 47.68, "text": " I don't think so, but apparently this paper promises a technique that supports"}, {"start": 47.68, "end": 52.8, "text": " more extreme compression and deformation, and when they say that, they really mean it."}, {"start": 52.8, "end": 57.68, "text": " And let's see what this can do through five super fun experiments."}, {"start": 57.68, "end": 60.239999999999995, "text": " Experiment number one squishing."}, {"start": 60.959999999999994, "end": 66.96, "text": " As you see, this paper aligns well with the favorite pastimes of a computer graphics researcher,"}, {"start": 66.96, "end": 71.75999999999999, "text": " which is, of course, destroying virtual objects in a spectacular fashion."}, {"start": 72.64, "end": 77.67999999999999, "text": " First, we force these soft elastic virtual objects through a thin obstacle tube."}, {"start": 77.68, "end": 85.2, "text": " Things get quite squishy here, out, and when they come out on the other side, their geometries can"}, {"start": 85.2, "end": 91.84, "text": " also separate properly. And watch how beautifully they regain their original shapes afterwards."}, {"start": 92.48, "end": 99.36000000000001, "text": " Experiment number two, the tendril test. We grab a squishy ball and throw it at the wall,"}, {"start": 99.36000000000001, "end": 105.60000000000001, "text": " and here comes the cool part. This panel was made of glass, so we also get to view the whole"}, {"start": 105.6, "end": 112.24, "text": " interaction through it, and this way we can see all the squishing happening. Look, the tendrils"}, {"start": 112.24, "end": 119.11999999999999, "text": " are super detailed, and every single one remains intact, and intersection free, despite the intense"}, {"start": 119.11999999999999, "end": 126.96, "text": " compression. Outstanding. Experiment number three, the twisting test. We take a piece of mat"}, {"start": 126.96, "end": 134.79999999999998, "text": " and keep twisting and twisting, and still going. Note that the algorithm has to compute up to"}, {"start": 134.8, "end": 142.56, "text": " half a million contact events every time it advances the time a tiny bit, and still, no self-intersections,"}, {"start": 142.56, "end": 149.52, "text": " no anomalies. This is crazy. Some of our more seasoned fellow scholars will immediately ask,"}, {"start": 149.52, "end": 156.32000000000002, "text": " okay, great, but how real is all this? 
Is this just good enough to fool the entry in die,"}, {"start": 156.32000000000002, "end": 162.4, "text": " or does it really simulate what would happen in reality? Well, hold on to your papers,"}, {"start": 162.4, "end": 168.64000000000001, "text": " because here comes my favorite part in these simulation papers, and this is when we let reality"}, {"start": 168.64000000000001, "end": 176.48000000000002, "text": " be our judge, and try to reproduce real-world footage with a simulation. Experiment number four,"}, {"start": 176.48000000000002, "end": 182.8, "text": " the high-speed impact test. Here is the real footage of a foam practice ball fired at a plate."}, {"start": 183.52, "end": 190.16, "text": " And now, at the point of impact, this part of the ball has stopped, but the other side is still"}, {"start": 190.16, "end": 196.96, "text": " flying with a high velocity. So, what will be the result? A ton of compression."}, {"start": 198.88, "end": 201.04, "text": " So, what does this simulator say about this?"}, {"start": 205.51999999999998, "end": 213.6, "text": " My goodness, just look at that. This is really accurate, loving it. This sounds all great,"}, {"start": 213.6, "end": 220.0, "text": " but do we really need this technique? The answer shall be given by experiment number five,"}, {"start": 220.0, "end": 227.12, "text": " ghosts, and chains. What could that mean? Here, you see Houdini's volume, the industry standard"}, {"start": 227.12, "end": 233.92, "text": " simulation for a cloth, soft body, and the number of other kinds of simulations. It is an absolutely"}, {"start": 233.92, "end": 242.8, "text": " amazing tool, but wait a second. Look, artificial ghost forces appear even on a simple test case"}, {"start": 242.8, "end": 251.12, "text": " with 35 chain links. And I wonder if the new method can deal with these 35 chain links? The answer"}, {"start": 251.12, "end": 259.04, "text": " is a resounding yes, no ghost forces. And not only that, but it can deal with even longer chains"}, {"start": 259.04, "end": 268.8, "text": " let's try a hundred links. Oh yeah, now we're talking. And now, only one question remains."}, {"start": 268.8, "end": 276.0, "text": " How much do we have to wait for all this? All this new technique asks for is a few seconds per frame"}, {"start": 276.0, "end": 282.08000000000004, "text": " for the simpler scenes and in the order of minutes per frame for the more crazy tests out there."}, {"start": 282.72, "end": 289.52, "text": " Praise the papers. That is a fantastic deal. And what is even more fantastic, all this is"}, {"start": 289.52, "end": 295.36, "text": " performed on your processor. So, of course, if someone can implement it in a way that it runs on"}, {"start": 295.36, "end": 301.52000000000004, "text": " the graphics card, the next paper down the line will be much, much faster. This episode has been"}, {"start": 301.52000000000004, "end": 307.44, "text": " supported by weights and biases. In this post, they show you how to use their tool to interpret"}, {"start": 307.44, "end": 312.96000000000004, "text": " the results of your neural networks. For instance, they tell you how to look if your neural network"}, {"start": 312.96000000000004, "end": 318.56, "text": " has even looked at the dog in an image before classifying it to be a dog. If you work with"}, {"start": 318.56, "end": 324.24, "text": " learning algorithms on a regular basis, make sure to check out weights and biases. 
Their system"}, {"start": 324.24, "end": 330.32, "text": " is designed to help you organize your experiments and it is so good it could shave off weeks or even"}, {"start": 330.32, "end": 336.88, "text": " months of work from your projects and is completely free for all individuals, academics and open source"}, {"start": 336.88, "end": 342.72, "text": " projects. This really is as good as it gets and it is hardly a surprise that they are now used by"}, {"start": 342.72, "end": 350.40000000000003, "text": " over 200 companies and research institutions. Make sure to visit them through wnb.com slash papers"}, {"start": 350.4, "end": 355.67999999999995, "text": " or just click the link in the video description and you can get a free demo today. Our thanks to"}, {"start": 355.67999999999995, "end": 361.2, "text": " weights and biases for their longstanding support and for helping us make better videos for you."}, {"start": 361.2, "end": 389.36, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=I04zRq6UlIg
This Magnetic Simulation Took Nearly A Month! 🧲
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "A Level-Set Method for Magnetic Substance Simulation" is available here: https://binwangbfa.github.io/publication/sig20_ferrofluid/SIG20_FerroFluid.pdf https://starryuniv.cn/ http://vcl.pku.edu.cn/publication/2020/magnetism https://starryuniv.cn/publication/a-level-set-method-for-magnetic-substance-simulation/ Some links may be down, trying to add several of them to make sure you find one that works! ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Joshua Goller, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Have a look at these beautiful ferrofluid simulations from a previous paper. These are fluids that have magnetic properties and thus respond to an external magnetic field, and yes, you are seeing correctly, they are able to even climb things. And the best part is that the simulator was so accurate that we could run it side by side with real life footage and see that they run very similarly. Excellent! Now, running these simulations took a considerable amount of time. To address this, a follow-up paper appeared that showcased a surface-only formulation. What does that mean? Well, a key observation here was that for a class of ferrofluids, we don't have to compute how the magnetic forces act on the entirety of the 3D fluid domain, we only have to compute them on the surface of the model. So, what does this get us? Well, these amazing fluid librarians and all of these ferrofluid simulations, but faster. So remember, the first work did something new, but took a very long time, and the second work improved it to make it faster and more practical. Please remember this for later in this video. And now, let's fast forward to today's paper, and this new work can also simulate ferrofluids, and not only that, but also support a broader range of magnetic phenomena, including rigid and deformable magnetic bodies and two-way coupling too. Oh my, that is sensational! But first, what do these terms mean exactly? Let's perform four experiments, and after you watch them, I promise that you will understand all about them. Let's look at the rigid bodies first in experiment number one: iron box versus magnet. We are starting out slow, and now we are waiting for the attraction to kick in. And there we go. Wonderful. Experiment number two: deformable magnetic bodies. In other words, magnetic lotus versus a moving magnet. This one is absolutely beautiful, look at how the petals here are modeled as thin, elastic sheets that dance around in the presence of a moving magnet. And if you think this is dancing, stay tuned, there will be an example with even better dance moves in a moment. And experiment number three: two-way coupling. We noted this coupling thing earlier, so what does that mean? What coupling means is that here, the water can have an effect on the magnet, and the two-way part means that in return, the magnet can also have an effect on the water as well. This is excellent, because we don't have to think about the limitations of the simulation, we can just drop nearly anything into our simulation domain, be it a fluid or a solid, magnetic or not, and we can expect that their interactions are going to be modeled properly. Outstanding. And I promised some more dancing, so here goes experiment number four: the dancing ferrofluid. I love how informative this compass is here. It is a simple object that tells us how an external magnetic field evolves over time. I love this elegant solution. Normally, we have to visualize the magnetic induction lines so we can better see why the tentacles of a magnetic octopus move, or why two ferrofluid droplets repel or attract each other. In this case, the authors opted for a much more concise and elegant solution, and I also like that the compass is not just a 2D overlay, but a properly shaded 3D object with specular reflections as well. Excellent attention to detail. This is really my kind of paper.
Now, these simulations were not run on any kind of supercomputer or a network of computers; this runs on the processor of your consumer machine at home. However, simulating even the simpler scenes takes hours. For more complex scenes, even days. And that's not all, the ferrofluid with the Yin-Yang symbol took nearly a month to compute. So is that a problem? No, no, of course not. Not in the slightest, because thanks to this paper, general magnetic simulations that were previously impossible are now possible. And don't forget, research is a process. As you saw in the example at the start of this video, with the surface-only ferrofluid formulation, it may become much faster just one more paper down the line. I wanted to show you the first two papers in this video to demonstrate how quickly that can happen. And two more papers down the line, oh my, then the sky is the limit. What a time to be alive. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold onto your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
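The compass that the transcript praises is simply a needle that aligns with the local external field. As a hedged illustration that is not taken from the paper, the classical point-dipole field B(r) = μ0/(4π) · (3 r̂ (m·r̂) − m)/|r|³ is enough to reproduce that behavior; the positions and the dipole moment below are made-up values.

```python
# Hedged illustration (not the paper's solver): evaluate the point-dipole field
# at the compass position and report the needle angle as the magnet moves.
import numpy as np

MU0 = 4e-7 * np.pi                         # vacuum permeability (T*m/A)

def dipole_field(r: np.ndarray, m: np.ndarray) -> np.ndarray:
    """Magnetic flux density at offset r from a point dipole with moment m."""
    d = np.linalg.norm(r)
    r_hat = r / d
    return MU0 / (4 * np.pi) * (3 * r_hat * (m @ r_hat) - m) / d**3

compass_pos = np.array([0.0, 0.0, 0.0])
moment = np.array([1.0, 0.0, 0.0])         # made-up dipole moment, along +x

# Slide the magnet along the y axis and watch the compass needle turn.
for y in np.linspace(0.2, 1.0, 5):
    magnet_pos = np.array([0.3, y, 0.0])
    B = dipole_field(compass_pos - magnet_pos, moment)
    needle_deg = np.degrees(np.arctan2(B[1], B[0]))   # in-plane needle angle
    print(f"magnet at y = {y:.2f}  ->  needle angle {needle_deg:8.2f} deg")
```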
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karojona Ifehid."}, {"start": 5.0, "end": 10.16, "text": " Have a look at these beautiful, fair-fluid simulations from a previous paper."}, {"start": 10.16, "end": 16.240000000000002, "text": " These are fluids that have magnetic properties and thus respond to an external magnetic field,"}, {"start": 16.240000000000002, "end": 21.84, "text": " and yes, you are seeing correctly they are able to even climb things."}, {"start": 21.84, "end": 26.6, "text": " And the best part is that the simulator was so accurate that we could run it side by"}, {"start": 26.6, "end": 31.8, "text": " side with real life footage and see that they run very similarly."}, {"start": 31.8, "end": 33.2, "text": " Excellent!"}, {"start": 33.2, "end": 37.56, "text": " Now, running these simulations took a considerable amount of time."}, {"start": 37.56, "end": 43.72, "text": " To address this, a follow-up paper appeared that showcased a surface-only formulation."}, {"start": 43.72, "end": 45.120000000000005, "text": " What does that mean?"}, {"start": 45.120000000000005, "end": 50.760000000000005, "text": " Well, a key observation here was that for a class of fair-fluids, we don't have to compute"}, {"start": 50.760000000000005, "end": 56.28, "text": " how the magnetic forces act on the entirety of the 3D fluid domain, we only have to"}, {"start": 56.28, "end": 59.08, "text": " compute them on the surface of the model."}, {"start": 59.08, "end": 61.480000000000004, "text": " So, what does this get us?"}, {"start": 61.480000000000004, "end": 69.52, "text": " Well, these amazing fluid librarians and all of these fair-fluid simulations, but faster."}, {"start": 69.52, "end": 75.68, "text": " So remember, the first work did something new, but took a very long time and the second"}, {"start": 75.68, "end": 79.72, "text": " work improved it to make it faster and more practical."}, {"start": 79.72, "end": 82.88, "text": " Please remember this for later in this video."}, {"start": 82.88, "end": 89.0, "text": " And now, let's fast forward to today's paper, and this new work can also simulate fair-fluids"}, {"start": 89.0, "end": 95.32, "text": " and not only that, but also support a broader range of magnetic phenomena, including rigid"}, {"start": 95.32, "end": 99.88, "text": " and deformable magnetic bodies and two-way coupling too."}, {"start": 99.88, "end": 103.47999999999999, "text": " Oh my, that is sensational!"}, {"start": 103.47999999999999, "end": 106.8, "text": " But first, what do these terms mean exactly?"}, {"start": 106.8, "end": 111.91999999999999, "text": " Let's perform four experiments and after you watch them, I promise that you will understand"}, {"start": 111.92, "end": 113.48, "text": " all about them."}, {"start": 113.48, "end": 117.68, "text": " Let's look at the rigid bodies first in experiment number one."}, {"start": 117.68, "end": 120.24000000000001, "text": " Iron box versus magnet."}, {"start": 120.24000000000001, "end": 126.72, "text": " We are starting out slow, and now we are waiting for the attraction to kick in."}, {"start": 126.72, "end": 128.68, "text": " And there we go."}, {"start": 128.68, "end": 131.2, "text": " Wonderful."}, {"start": 131.2, "end": 133.08, "text": " Experiment number two."}, {"start": 133.08, "end": 134.8, "text": " Deformable magnetic bodies."}, {"start": 134.8, "end": 139.12, "text": " In other words, magnetic lotus versus a moving magnet."}, {"start": 139.12, "end": 145.16, 
"text": " This one is absolutely beautiful, look at how the petals here are modeled as thin, elastic"}, {"start": 145.16, "end": 149.96, "text": " sheets that dance around in the presence of a moving magnet."}, {"start": 149.96, "end": 155.18, "text": " And if you think this is dancing, stay tuned, there will be an example with even better"}, {"start": 155.18, "end": 157.44, "text": " dance moves in a moment."}, {"start": 157.44, "end": 159.48000000000002, "text": " And experiment number three."}, {"start": 159.48000000000002, "end": 160.68, "text": " Two-way coupling."}, {"start": 160.68, "end": 164.92000000000002, "text": " We noted this coupling thing earlier, so what does that mean?"}, {"start": 164.92, "end": 169.79999999999998, "text": " What coupling means is that here, the water can have an effect on the magnet, and the"}, {"start": 169.79999999999998, "end": 176.27999999999997, "text": " two-way part means that in return, the magnet can also have an effect on the water as well."}, {"start": 176.27999999999997, "end": 181.04, "text": " This is excellent, because we don't have to think about the limitations of the simulation,"}, {"start": 181.04, "end": 187.11999999999998, "text": " we can just drop in nearly anything into our simulation domain, beat a fluid, solid,"}, {"start": 187.11999999999998, "end": 193.72, "text": " magnetic or not, and we can expect that their interactions are going to be modeled properly."}, {"start": 193.72, "end": 195.24, "text": " It's standing."}, {"start": 195.24, "end": 201.24, "text": " And I promised some more dancing, so here goes experiment number four, the dancing,"}, {"start": 201.24, "end": 202.24, "text": " ferrofluid."}, {"start": 202.24, "end": 205.6, "text": " I love how informative this compass is here."}, {"start": 205.6, "end": 211.96, "text": " It is a simple object that tells us how an external magnetic field evolves over time."}, {"start": 211.96, "end": 214.68, "text": " I love this elegant solution."}, {"start": 214.68, "end": 219.88, "text": " Normally, we have to visualize the magnetic induction lines so we can better see why the"}, {"start": 219.88, "end": 227.07999999999998, "text": " tentacles of a magnetic octopus move, or why two ferrofluid droplets repel or attract"}, {"start": 227.07999999999998, "end": 228.64, "text": " each other."}, {"start": 228.64, "end": 234.16, "text": " In this case, the authors opted for a much more concise and elegant solution, and I also"}, {"start": 234.16, "end": 241.0, "text": " like that the compass is not just a 2D overlay, but a properly shaded 3D object with specular"}, {"start": 241.0, "end": 243.6, "text": " reflections as well."}, {"start": 243.6, "end": 245.4, "text": " Excellent attention to detail."}, {"start": 245.4, "end": 248.04, "text": " This is really my kind of paper."}, {"start": 248.04, "end": 254.32, "text": " Now, these simulations were not run on any kind of supercomputer or a network of computers,"}, {"start": 254.32, "end": 258.32, "text": " this runs on the processor of your consumer machine at home."}, {"start": 258.32, "end": 263.96, "text": " However, simulating even the simpler scenes takes hours."}, {"start": 263.96, "end": 267.15999999999997, "text": " For more complex scenes, even days."}, {"start": 267.15999999999997, "end": 273.52, "text": " And that's not all, the ferrofluid with the YenYang symbol took nearly a month to compute."}, {"start": 273.52, "end": 276.2, "text": " So is that a problem?"}, {"start": 276.2, "end": 278.4, "text": " No, no, of course 
not."}, {"start": 278.4, "end": 283.47999999999996, "text": " Not in the slightest, because thanks to this paper, general magnetic simulations that were"}, {"start": 283.47999999999996, "end": 289.52, "text": " previously impossible are now possible and don't forget research is a process."}, {"start": 289.52, "end": 293.8, "text": " As you saw in the example at the start of this video, with the surface only ferrofluid"}, {"start": 293.8, "end": 299.44, "text": " formulation, it may become much faster, just one more paper down the line."}, {"start": 299.44, "end": 304.12, "text": " I wanted to show you the first two papers in this video to demonstrate how quickly that"}, {"start": 304.12, "end": 305.8, "text": " can happen."}, {"start": 305.8, "end": 311.12, "text": " And two more papers down the line, oh my, then the sky is the limit."}, {"start": 311.12, "end": 312.76, "text": " What a time to be alive."}, {"start": 312.76, "end": 316.24, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 316.24, "end": 322.2, "text": " If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 322.2, "end": 329.2, "text": " They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances."}, {"start": 329.2, "end": 335.76, "text": " And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and"}, {"start": 335.76, "end": 336.76, "text": " Azure."}, {"start": 336.76, "end": 342.08, "text": " Plus, they are the only Cloud service with 48GB RTX 8000."}, {"start": 342.08, "end": 348.48, "text": " Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances,"}, {"start": 348.48, "end": 350.28, "text": " workstations or servers."}, {"start": 350.28, "end": 355.64, "text": " Make sure to go to lambdaleps.com slash papers to sign up for one of their amazing GPU"}, {"start": 355.64, "end": 357.0, "text": " instances today."}, {"start": 357.0, "end": 361.76, "text": " Our thanks to Lambda for their longstanding support and for helping us make better videos"}, {"start": 361.76, "end": 362.76, "text": " for you."}, {"start": 362.76, "end": 366.59999999999997, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=9kllWAX9tHw
Differentiable Material Synthesis Is Amazing! ☀️
❤️ Check out Perceptilabs and sign up for a free demo here: https://www.perceptilabs.com/papers 📝 The paper "MATch: Differentiable Material Graphs for Procedural Material Capture" is available here: http://match.csail.mit.edu/ 📝 Our Photorealistic Material Editing paper is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/photorealistic-material-editing/ ☀️ The free course on writing light simulations is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/rendering-course/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image: https://pixabay.com/images/id-4238615/ Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. I am a light transport researcher by trade, and I am very happy because we have an absolutely amazing light transport paper to enjoy today. As many of you know, we write these programs that you can run on your computer to simulate millions and millions of light rays and calculate how they get absorbed or scattered off of our objects in a virtual scene. Initially, we start out with a really noisy image, and as we add more rays, the image gets clearer and clearer over time. We can also simulate sophisticated material models in these programs. A modern way of doing that is through using these material nodes. With these, we can conjure up a ton of different material models and change their physical properties to our liking. As you see, they are very expressive indeed; however, the more nodes we use, the less clear it becomes how they interact with each other. And as you see, every time we change something, we have to wait until a new image is rendered. That is very time consuming, and more importantly, we have to have some material modeling expertise to use this. This concept is very powerful. For instance, I think if you watch the Perceptilabs sponsorship spot at the end of this video, you will be very surprised to see that they also use node groups, but with theirs, you don't build material models, you build machine learning models. What would also be really cool is if we could just give the machine a photo and it would figure out how to set up these nodes so it looks exactly like the material in the photo. So, is that possible, or is that science fiction? Well, have a look at our paper called photorealistic material editing. With this technique, we can easily create these beautiful material models in a matter of seconds, even if we don't know a thing about light transport simulations. It does something that is similar to what many call differentiable rendering. Here is the workflow: we give it a bunch of images like these, which were created on this particular test scene, and it guesses what parameters to use to get these material models. Now, of course, this doesn't make any sense whatsoever, because we have produced these images ourselves, so we know exactly what parameters to use to produce this. In other words, this thing seems useless, and now comes the magic part, because we don't use these images. No, no, we load them into Photoshop and edit them to our liking and just pretend that these images were created with the light simulation program. This means that we can create a lot of quick and really poorly executed edits. For instance, the stitched specular highlight in the first example isn't very well done, and neither is the background of the gold target image in the middle. However, the key observation is that we have built a mathematical framework which makes this pretending really work. Look, in the next step our method proceeds to find a photorealistic material description that, when rendered, resembles this target image, and works well even in the presence of these poorly executed edits. So these materials are completely made up in Photoshop, and it turns out we can create photorealistic materials through these node graphs that look almost exactly the same, quite remarkable. And the whole process executes in 20 seconds. If you are one of the more curious Fellow Scholars out there, this paper and its source code are available in the video description.
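The transcript above mentions that a light simulation starts out noisy and gets clearer as more rays are added. That behavior is plain Monte Carlo integration, whose error shrinks roughly as 1/√N with the number of samples. The toy below is an assumed illustration, not code from the course or the paper: it estimates one pixel's brightness with a cosine-weighted hemisphere sampler whose exact expected value is 2/3, so the error can be measured directly.

```python
# Minimal Monte Carlo illustration of why path-traced images start noisy and
# converge as more rays are added: error drops roughly as 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(1)

def shade_sample() -> float:
    """Stand-in for tracing one random light path: cosine-weighted direction."""
    theta = np.arccos(np.sqrt(rng.random()))   # cosine-weighted polar angle
    return np.cos(theta)                       # this path's contribution

TRUE_VALUE = 2.0 / 3.0                         # exact E[cos(theta)] for this sampler
for n in (16, 256, 4_096, 65_536):
    estimate = np.mean([shade_sample() for _ in range(n)])
    print(f"{n:6d} rays -> pixel value {estimate:.4f} "
          f"(error {abs(estimate - TRUE_VALUE):.4f})")
```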
Now, this differentiable thing has a lot of steam. For instance, there are more works on differentiable rendering. In this other work, we can take a photo of a scene, and the learning-based method turns the knobs until it finds a digital object that matches its geometry and material properties. This was a stunning piece of work from Wenzel Jakob and his group. Of course, who else? They are some of the best in the business. And we don't even need to be in the area of light transport simulations to enjoy the benefits of differentiable formulations; for instance, this is differentiable physics. So, what is that? Imagine that we have this billiard game where we would like to hit the white ball with just the right amount of force and from the right direction, such that the blue ball ends up close to the black spot. Well, this example shows that this is unlikely to happen by chance, and we have to engage in a fair amount of trial and error to make this happen. What this differentiable programming system does for us is that we can specify an end state, which is the blue ball on the black dot, and it is able to compute the required forces and angles to make this happen. Very close. So, after you look here, maybe you can now guess what's next for this differentiable technique. It starts out with a piece of simulated ink with a checkerboard pattern and it exerts just the appropriate forces so that it forms exactly the Yin-Yang symbol shortly after. And now that we understand what differentiable techniques are capable of, we are ready to proceed to today's paper. This is a proper, fully differentiable material capture technique for real photographs. All this needs is one flash photograph of a real world material. We have those around us in abundance, and similarly to our previous method, it sets up the material nodes for it. That is a good thing, because I don't know about you, but I do not want to touch this mess at all. Luckily, we don't have to. Look, the left is the target photo, and the right is the initial guess of the algorithm, which is not bad, but also not very close. And now, hold on to your papers and just look at how it proceeds to refine this material until it closely matches the target. And with that, we have a digital representation of these materials. We can now easily build a library of these materials and assign them to objects in our scene. And then, we run the light simulation program, and here we go. Beautiful. At this point, if we feel adventurous, we can adjust small things in the material graphs to create a digital material that is more in line with our artistic vision. That is great, because it is much easier to adjust an already existing material model than creating one from scratch. So, what are the key differences between our work from last year and this new paper? Our work made a rough initial guess and optimized the parameters afterwards. It was also chock full of neural networks. It also created materials from a sample, but that sample was not a photograph, but a photoshopped image. That is really cool. However, this new method takes an almost arbitrary photo. Many of these we can take ourselves or even get them from the internet. Therefore, this new method is more general. It also supports 131 different material node types, which is insanity. Huge congratulations to the authors. If I were an artist, I would want to work with this right about now. What a time to be alive. So, there you go. This was quite a ride, and I hope you enjoyed it at least half as much as I did.
And if you did, and you feel a little stranded at home and are thinking that this light transport thing is pretty cool and you would like to learn more about it, I held a master-level course on this topic at the Technical University of Vienna. Since I was always teaching it to a handful of motivated students, I thought that the teachings shouldn't only be available for the privileged few who can afford a college education; the teachings should be available for everyone. Free education for everyone, that's what I want. So, the course is available free of charge for everyone, no strings attached. Make sure to click the link in the video description to get started. We write a full light simulation program from scratch there and learn about physics, the world around us, and more. Perceptilabs is a visual API for TensorFlow, carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers to easily install the free local version of their system today. Our thanks to Perceptilabs for their support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
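The capture loop described in this transcript, start from a guess, render it, compare against the flash photograph, and nudge the parameters, is the standard differentiable-optimization recipe. The sketch below is a hedged toy version with an assumed two-parameter shading model (a Lambertian term plus a Blinn-Phong-style highlight) instead of the paper's procedural node graphs; the "photograph" is synthesized, and all names and values are placeholders.

```python
# Hedged toy of differentiable material capture: a tiny differentiable shader
# exposes two parameters, and gradient descent recovers them from a target
# image. The real paper optimizes full procedural node graphs; this only shows
# the core "render, compare, backpropagate" idea.
import torch
import torch.nn.functional as F

def render(albedo, shininess, normals, light_dir, view_dir):
    n_dot_l = (normals * light_dir).sum(-1).clamp(min=0.0)
    half_vec = F.normalize(light_dir + view_dir, dim=-1)
    # Clamp away from zero so the exponent stays differentiable everywhere.
    n_dot_h = (normals * half_vec).sum(-1).clamp(min=1e-4)
    return albedo * n_dot_l + n_dot_h ** shininess        # per-pixel intensity

torch.manual_seed(0)
# A bumpy normal map stands in for the captured material's geometry.
normals = F.normalize(torch.randn(64, 64, 3) * 0.2 +
                      torch.tensor([0.0, 0.0, 1.0]), dim=-1)
light_dir = F.normalize(torch.tensor([0.3, 0.4, 1.0]), dim=-1)   # flash direction
view_dir = torch.tensor([0.0, 0.0, 1.0])

# The "flash photograph": rendered with unknown ground-truth parameters.
target = render(torch.tensor(0.6), torch.tensor(40.0),
                normals, light_dir, view_dir)

albedo = torch.tensor(0.2, requires_grad=True)        # initial guesses
shininess = torch.tensor(5.0, requires_grad=True)
optimizer = torch.optim.Adam([albedo, shininess], lr=0.1)

for _ in range(1000):
    optimizer.zero_grad()
    rendered = render(albedo, shininess, normals, light_dir, view_dir)
    loss = ((rendered - target) ** 2).mean()
    loss.backward()                                    # gradients via autodiff
    optimizer.step()

# The values should move toward the ground truth of 0.6 and 40.
print(f"recovered albedo = {albedo.item():.3f}, shininess = {shininess.item():.1f}")
```

Swapping the toy shader for a differentiable procedural material graph and the synthetic target for a real flash photo gives the general shape of the capture pipeline the video describes.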
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Kato Jolene-Fehir."}, {"start": 4.8, "end": 14.3, "text": " I am a light transport researcher by trade and I am very happy today because we have an absolutely amazing light transport paper we are going to enjoy today."}, {"start": 14.3, "end": 27.1, "text": " As many of you know, we write these programs that you can run on your computer to simulate millions and millions of light rays and calculate how they get absorbed or scattered off of our objects in a virtual scene."}, {"start": 27.1, "end": 35.800000000000004, "text": " Initially, we start out with a really noisy image and as we add more rays, the image gets clearer and clearer over time."}, {"start": 35.800000000000004, "end": 40.400000000000006, "text": " We can also simulate sophisticated material models in these programs."}, {"start": 40.400000000000006, "end": 44.900000000000006, "text": " A modern way of doing that is through using these material nodes."}, {"start": 44.900000000000006, "end": 52.400000000000006, "text": " With these, we can conjure up a ton of different material models and change their physical properties to our liking."}, {"start": 52.4, "end": 61.5, "text": " As you see, they are very expressive indeed, however, the more nodes we use, the less clear it becomes how they interact with each other."}, {"start": 61.5, "end": 67.8, "text": " And as you see, every time we change something, we have to wait until a new image is rendered."}, {"start": 67.8, "end": 75.0, "text": " That is very time consuming and more importantly, we have to have some material modeling expertise to use this."}, {"start": 75.0, "end": 93.0, "text": " This concept is very powerful. For instance, I think if you watch the Perceptilebs sponsorship spot at the end of this video, you will be very surprised to see that they also use node groups, but with theirs, you don't build material models, you can build machine learning models."}, {"start": 93.0, "end": 104.1, "text": " What would also be really cool if we could just give the machine a photo and it would figure out how to set up these nodes so it looks exactly like the material in the photo."}, {"start": 104.1, "end": 113.5, "text": " So, is that possible or is that science fiction? 
Well, have a look at our paper called photorealistic material editing."}, {"start": 113.5, "end": 123.5, "text": " With this technique, we can easily create these beautiful material models in a matter of seconds, even if we don't know a thing about light transport simulations."}, {"start": 123.5, "end": 129.0, "text": " It does something that is similar to what many call differentiable rendering."}, {"start": 129.0, "end": 139.9, "text": " Here is the workflow, we give it a bunch of images like these which were created on this particular test scene and it guesses what parameters to use to get these material models."}, {"start": 139.9, "end": 151.1, "text": " Now, of course, this doesn't make any sense whatsoever because we have produced these images ourselves so we know exactly what parameters to use to produce this."}, {"start": 151.1, "end": 159.0, "text": " In other words, this thing seems useless and now comes the magic part because we don't use these images."}, {"start": 159.0, "end": 169.0, "text": " No, no, we load them into Photoshop and edit them to our liking and just pretend that these images were created with the light simulation program."}, {"start": 169.0, "end": 174.5, "text": " This means that we can create a lot of quickly and really poorly executed edits."}, {"start": 174.5, "end": 183.5, "text": " For instance, the stitched specular highlight in the first example isn't very well done and neither is the background of the gold target image in the middle."}, {"start": 183.5, "end": 191.4, "text": " However, the key observation is that we have built a mathematical framework which makes this pretending really work."}, {"start": 191.4, "end": 205.8, "text": " Look, in the next step our method proceeds to find a photorealistic material description that, when rendered, resembles this target image and works well even in the presence of these poorly executed edits."}, {"start": 205.8, "end": 218.4, "text": " So these materials are completely made up in Photoshop and it turns out we can create photorealistic materials through these notegraphs that look almost exactly the same, quite remarkable."}, {"start": 218.4, "end": 222.0, "text": " And the whole process executes in 20 seconds."}, {"start": 222.0, "end": 229.5, "text": " If you are one of the more curious fellow scholars out there, this paper and its source code are available in the video description."}, {"start": 229.5, "end": 233.3, "text": " Now, this differentiable thing has a lot of steam."}, {"start": 233.3, "end": 236.6, "text": " For instance, there are more works on differentiable rendering."}, {"start": 236.6, "end": 248.3, "text": " In this other work, we can take a photo of a scene and the learning-based method turns the knobs until it finds a digital object that matches its geometry and material properties."}, {"start": 248.3, "end": 254.70000000000002, "text": " This was a stunning piece of work from Vence Ayakob and his group."}, {"start": 254.70000000000002, "end": 258.7, "text": " Of course, who else? They are some of the best in the business."}, {"start": 258.7, "end": 269.90000000000003, "text": " And we don't even need to be in the area of light transport simulations to enjoy the benefits of differentiable formulations, for instance, this is differentiable physics."}, {"start": 269.9, "end": 284.0, "text": " So, what is that? 
Imagine that we have this billiard game where we would like to hit the white ball with just the right amount of force and from the right direction, such that the blue ball ends up close to the black spot."}, {"start": 284.0, "end": 292.79999999999995, "text": " Well, this example shows that this is unlikely to happen by chance and we have to engage in a fair amount of trial and error to make this happen."}, {"start": 292.8, "end": 305.7, "text": " What this differentiable programming system does for us is that we can specify an end state which is the blue ball on the black dot, and it is able to compute the required forces and angles to make this happen."}, {"start": 308.2, "end": 309.7, "text": " Very close."}, {"start": 309.7, "end": 315.40000000000003, "text": " So, after you look here, maybe you can now guess what's next for this differentiable technique."}, {"start": 315.4, "end": 328.29999999999995, "text": " It starts out with a piece of simulated ink with a checkerboard pattern and it exerts just the appropriate forces so that it forms exactly the Yin Yang symbol shortly after."}, {"start": 328.29999999999995, "end": 335.5, "text": " And now that we understand what differentiable techniques are capable of, we are ready to proceed to today's paper."}, {"start": 335.5, "end": 341.79999999999995, "text": " This is a proper, fully differentiable material capture technique for real photographs."}, {"start": 341.8, "end": 345.90000000000003, "text": " All this needs is one flash photograph of a real world material."}, {"start": 345.90000000000003, "end": 352.7, "text": " We have those around us in abundance and similarly to our previous method, it sets up the material notes for it."}, {"start": 352.7, "end": 359.40000000000003, "text": " That is a good thing because I don't know about you, but I do not want to touch this mess at all."}, {"start": 359.40000000000003, "end": 361.40000000000003, "text": " Luckily, we don't have to."}, {"start": 361.40000000000003, "end": 370.6, "text": " Look, the left is the target photo and the right is the initial guess of the algorithm that is not bad but also not very close."}, {"start": 370.6, "end": 378.6, "text": " And now, hold on to your papers and just look at how it proceeds to refine this material until it closely matches the target."}, {"start": 378.6, "end": 383.5, "text": " And with that, we have a digital representation of these materials."}, {"start": 383.5, "end": 390.5, "text": " We can now easily build a library of these materials and assign them to objects in our scene."}, {"start": 390.5, "end": 395.5, "text": " And then, we run the light simulation program and here we go."}, {"start": 395.5, "end": 397.0, "text": " Beautiful."}, {"start": 397.0, "end": 407.6, "text": " At this point, if we feel adventurous, we can adjust small things in the material graphs to create a digital material that is more in line with our artistic vision."}, {"start": 407.6, "end": 415.2, "text": " That is great because it is much easier to adjust an already existing material model than creating one from scratch."}, {"start": 415.2, "end": 421.6, "text": " So, what are the key differences between our work from last year and this new paper?"}, {"start": 421.6, "end": 429.0, "text": " Our work made a rough initial guess and optimized the parameters afterwards. 
It was also chock full of neural networks."}, {"start": 429.0, "end": 437.0, "text": " It also created materials from a sample, but that sample was not a photograph, but a photoshopped image."}, {"start": 437.0, "end": 442.70000000000005, "text": " That is really cool. However, this new method takes an almost arbitrary photo."}, {"start": 442.70000000000005, "end": 447.40000000000003, "text": " Many of these we can take ourselves or even get them from the internet."}, {"start": 447.40000000000003, "end": 450.20000000000005, "text": " Therefore, this new method is more general."}, {"start": 450.2, "end": 456.59999999999997, "text": " It also supports 131 different material node types which is insanity."}, {"start": 456.59999999999997, "end": 463.09999999999997, "text": " Huge congratulations to the authors. If I would be an artist, I would want to work with this right about now."}, {"start": 463.09999999999997, "end": 471.09999999999997, "text": " What a time to be alive. So, there you go. This was quite a ride and I hope you enjoyed it. Just half as much as I did."}, {"start": 471.09999999999997, "end": 474.09999999999997, "text": " And if you enjoyed it, at least as much as I did."}, {"start": 474.1, "end": 480.0, "text": " And you feel a little stranded at home and are thinking that this light transport thing is pretty cool."}, {"start": 480.0, "end": 487.0, "text": " And you would like to learn more about it. I held a master level course on this topic at the Technical University of Vienna."}, {"start": 487.0, "end": 500.5, "text": " Since I was always teaching it to a handful of motivated students, I thought that the teachings shouldn't only be available for the privileged few who can afford a college education, but the teachings should be available for everyone."}, {"start": 500.5, "end": 507.4, "text": " Free education for everyone, that's what I want. So, the course is available free of charge for everyone."}, {"start": 507.4, "end": 512.2, "text": " No strings attached. Make sure to click the link in the video description to get started."}, {"start": 512.2, "end": 522.6, "text": " We write a full light simulation program from scratch there and learn about physics, the world around us and more perceptilebs is a visual API for TensorFlow,"}, {"start": 522.6, "end": 536.5, "text": " carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs and how to debug it."}, {"start": 536.5, "end": 550.9, "text": " Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables and gives you recommendations both during modeling and training and does all this automatically."}, {"start": 550.9, "end": 563.3, "text": " I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilebs.com slash papers to easily install the free local version of their system today."}, {"start": 563.3, "end": 568.8, "text": " Our thanks to perceptilebs for their support and for helping us make better videos for you."}, {"start": 568.8, "end": 581.6999999999999, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
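The segments above describe inverse, differentiable workflows: pick a target (an edited photo, a flash photograph of a material, or a desired end state of a physics simulation) and let gradients turn the knobs until the output matches it. As a minimal, hedged sketch of that idea, and not the papers' actual code, here is the optimize-through-the-model pattern; the toy renderer and both parameter names below are made up for illustration only.

```python
import torch

# Toy "renderer": maps two made-up material-like parameters to a tiny
# 32-pixel "image". It stands in for a real differentiable renderer.
def render(albedo, glossiness):
    x = torch.linspace(0.0, 1.0, 32)
    highlight = torch.exp(-glossiness * (x - 0.5) ** 2)
    return albedo * (0.3 + 0.7 * highlight)

# A target "photo" produced with parameters we pretend not to know.
with torch.no_grad():
    target = render(torch.tensor(0.8), torch.tensor(40.0))

# Rough initial guess, then refine it by gradient descent through the renderer.
albedo = torch.tensor(0.2, requires_grad=True)
glossiness = torch.tensor(5.0, requires_grad=True)
opt = torch.optim.Adam([albedo, glossiness], lr=0.05)

for step in range(2000):
    opt.zero_grad()
    loss = torch.mean((render(albedo, glossiness) - target) ** 2)
    loss.backward()   # gradients flow back through the rendering step
    opt.step()

print(f"recovered albedo={albedo.item():.2f}, glossiness={glossiness.item():.1f}")
```

The same loop shape applies whether the model in the middle is a material node graph, a light simulator, or a physics engine, as long as it is differentiable.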
Two Minute Papers
https://www.youtube.com/watch?v=-Ny-p-CHNyM
Finally, Instant Monsters! 🐉
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/jxmorris12/huggingface-demo/reports/A-Step-by-Step-Guide-to-Tracking-Hugging-Face-Model-Performance--VmlldzoxMDE2MTU 📝 The paper "Monster Mash: A Single-View Approach to Casual 3D Modeling and Animation" is available here: https://dcgi.fel.cvut.cz/home/sykorad/monster_mash Web demo - make sure to click "Help" and read the instructions: http://monstermash.zone/# More on Flow by Mihály Csíkszentmihályi - it is immensely important to master this! https://www.youtube.com/watch?v=8h6IMYRoCZw 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #gamedev
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. If we wish to create an adorable virtual monster and animate it, we first have to engage in 3D modeling. Then, if we wish to make it move and make this movement believable, we have to specify where the bones and joints are located within the model. This process is called rigging. As you see, it is quite laborious and requires expertise in this domain to pull this off. And now, imagine this process with no 3D modeling and no rigging. This is, of course, impossible. Right? Well, now you know what's coming, so hold onto your papers, because here is a newer technique that indeed performs the impossible. All we need to do is grab a pencil and create a rough sketch of our character; then it will take a big breath and inflate it into a 3D model. This process was nearly 7 times faster than the classical workflow, but what matters even more, this new workflow requires zero expertise in 3D modeling and rigging. This means that with this technique, absolutely anybody can become an artist. So, we noted that these models can also be animated. Is that so? Yes, that's right. We can indeed animate these models by using these red control points, and even better, we get to specify where these points go. That's a good thing, because we can make sure that the prescribed part can move around, opening up the possibility of creating and animating a wide range of characters. And I would say all this can be done in a matter of minutes, but it's even better: sometimes even within a minute. Whoa! This new technique does a lot of legwork that previous methods were not able to pull off so well. For instance, it takes a little information about which parts are in front of or behind the model. Then, it stitches all of these strokes together and inflates our drawing into a 3D model, and it does this better than previous methods. Look. Well, okay, the new one looks a bit better where the body parts connect here, and that's it. Wait a second. Aha! Somebody didn't do their job correctly. And we went from this work to this in just two years. This progress is absolute insanity. Now, let's have a look at a full workflow from start to end. First, we draw the strokes; note that we can specify that one arm and leg are in front of the body and the other ones are behind, and bam! The 3D model is now done. Wow! That was quick! And now, add the little red control points for animation, and let the fun begin. Mister, your paper has been officially accepted. Move the feet, pin the hands, rock the body. Wait, not only that, but this paper was accepted to the SIGGRAPH Asia conference, which is equivalent to winning an Olympic gold medal in computer graphics research, if you will. So add the little neck movement too. Oh yeah, now we're talking. With this technique, the possibilities really feel endless. We can animate humanoids, monsters, other adorable creatures, or even make scientific illustrations come to life without any modeling and rigging expertise. Do you remember this earlier video where we could paint on a piece of 3D geometry and transfer its properties onto a 3D model? This method can be combined with that too. Yum! And in case you're wondering how quick this combination is, my goodness, very, very quick. Now, this technique is also not perfect. One of the limitations of this single-view drawing workflow is that we only have limited control over the proportions in depth. Drawing occluded regions is also not that easy.
The authors proposed possible solutions to these limitations in the paper, so make sure to have a look in the video description. And it appears to me that with a little polishing, this may be ready to go for artistic projects right now. If you have a closer look, you will also see that this work also cites the flow paper from Mihály Csíkszentmihályi. Extra style points for that. And with that said, when can we use this? And here comes the best part: right now. The authors really put their papers where their mouth is, or in other words, the source code for this project is available. Also, there is an online demo. Woohoo! The link is available in the video description. Make sure to read the instructions before you start. So, there you go. Instant 3D models with animation, without requiring 3D modeling and rigging expertise. What do you think? Let me know in the comments below. This episode has been supported by Weights & Biases. In this post, they show you how to use transformers from the Hugging Face library and how to track your model performance. During my PhD studies, I trained a ton of neural networks which were used in our experiments. However, over time, there was just too much data in our repositories, and what I am looking for is not data, but insight. And that's exactly how Weights & Biases helps you, by organizing your experiments. It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub, and more. And get this, Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
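The workflow above inflates a 2D sketch into a rounded 3D shape. As a rough illustration of the general idea, and not the Monster Mash method itself (which solves a proper inflation problem and stitches multiple drawn regions together), one can lift each pixel of a drawn silhouette by an amount that grows with its distance from the outline; the circular mask below is a made-up stand-in for a drawn character.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Made-up silhouette: a filled circle standing in for a drawn character.
h, w = 128, 128
yy, xx = np.mgrid[0:h, 0:w]
mask = (xx - 64) ** 2 + (yy - 64) ** 2 < 50 ** 2

# Distance of every interior pixel to the silhouette outline.
dist = distance_transform_edt(mask)

# "Inflate": pixels far from the outline get lifted higher, producing a
# puffy height field that could be mirrored into a closed, rounded 3D blob.
height = np.sqrt(dist)

print(f"peak height at the centre of the shape: {height.max():.1f}")
```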
[{"start": 0.0, "end": 4.5600000000000005, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Dr. Karo Jornai Fahir,"}, {"start": 4.5600000000000005, "end": 12.4, "text": " if we wish to create an adorable virtual monster and animate it, we first have to engage in 3D modeling."}, {"start": 12.4, "end": 17.12, "text": " Then, if we wish to make it move and make this movement believable,"}, {"start": 17.12, "end": 22.080000000000002, "text": " we have to specify where the bones and joints are located within the model."}, {"start": 22.080000000000002, "end": 24.96, "text": " This process is called rigging."}, {"start": 24.96, "end": 31.84, "text": " As you see, it is quite laborious and requires expertise in this domain to pull this off."}, {"start": 31.84, "end": 38.56, "text": " And now, imagine this process with no 3D modeling and no rigging."}, {"start": 38.56, "end": 41.36, "text": " This is, of course, impossible."}, {"start": 41.36, "end": 42.36, "text": " Right?"}, {"start": 42.36, "end": 48.84, "text": " Well, now you know what's coming, so hold onto your papers because here is a newer technique"}, {"start": 48.84, "end": 51.400000000000006, "text": " that indeed performs the impossible."}, {"start": 51.4, "end": 56.76, "text": " All we need to do is grab a pencil and create a rough sketch of our character,"}, {"start": 56.76, "end": 62.519999999999996, "text": " then it will take a big breath and inflate it into a 3D model."}, {"start": 62.519999999999996, "end": 67.0, "text": " This process was nearly 7 times faster than the classical workflow,"}, {"start": 67.0, "end": 74.84, "text": " but what matters even more, this new workflow requires zero expertise in 3D modeling and rigging."}, {"start": 74.84, "end": 80.2, "text": " This means that with this technique, absolutely anybody can become an artist."}, {"start": 80.2, "end": 84.2, "text": " So, we noted that these models can also be animated."}, {"start": 84.2, "end": 85.88, "text": " Is that so?"}, {"start": 85.88, "end": 87.4, "text": " Yes, that's right."}, {"start": 87.4, "end": 91.8, "text": " We can indeed animate these models by using these red control points,"}, {"start": 91.8, "end": 96.2, "text": " and even better, we get to specify where these points go."}, {"start": 96.2, "end": 101.4, "text": " That's a good thing because we can make sure that the prescribed part can move around"}, {"start": 101.4, "end": 107.4, "text": " opening up the possibility of creating and animating a wide range of characters."}, {"start": 107.4, "end": 111.4, "text": " And I would say all this can be done in a matter of minutes,"}, {"start": 111.4, "end": 116.60000000000001, "text": " but it's even better sometimes even within a minute."}, {"start": 116.60000000000001, "end": 118.60000000000001, "text": " Whoa!"}, {"start": 118.60000000000001, "end": 124.76, "text": " This new technique does a lot of lagwork that previous methods were not able to pull off so well."}, {"start": 124.76, "end": 131.0, "text": " For instance, it takes a little information about which part is in front or behind the model."}, {"start": 131.0, "end": 137.8, "text": " Then, it stitches all of these strokes together and inflates are drawing into a 3D model,"}, {"start": 137.8, "end": 142.6, "text": " and it does this better than previous methods. 
Look."}, {"start": 142.6, "end": 149.8, "text": " Well, okay, the new one looks a bit better where the body parts connect here, and that's it."}, {"start": 149.8, "end": 152.2, "text": " Wait a second."}, {"start": 152.2, "end": 157.0, "text": " Aha! Somebody didn't do their job correctly."}, {"start": 157.0, "end": 161.8, "text": " And we went from this work to this in just two years."}, {"start": 161.8, "end": 165.0, "text": " This progress is absolute insanity."}, {"start": 165.0, "end": 169.0, "text": " Now, let's have a look at a full workflow from start to end."}, {"start": 169.0, "end": 176.2, "text": " First, we draw the strokes, note that we can specify that one arm and leg is in front of the body,"}, {"start": 176.2, "end": 179.4, "text": " and the other one is behind, and bam!"}, {"start": 179.4, "end": 182.6, "text": " The 3D model is now done."}, {"start": 182.6, "end": 184.6, "text": " Wow! That was quick!"}, {"start": 184.6, "end": 190.6, "text": " And now, add the little red control points for animation, and let the fun begin."}, {"start": 190.6, "end": 194.6, "text": " Mr. your paper has been officially accepted."}, {"start": 194.6, "end": 198.6, "text": " Move the feet, pin the hands, rock the body."}, {"start": 198.6, "end": 203.79999999999998, "text": " Wait, not only that, but this paper was accepted to the C-Graph Asia conference,"}, {"start": 203.79999999999998, "end": 209.4, "text": " which is equivalent to winning an Olympic gold medal in computer graphics research, if you will."}, {"start": 209.4, "end": 212.6, "text": " So add the little neck movement too."}, {"start": 212.6, "end": 214.6, "text": " Oh yeah, now we're talking."}, {"start": 214.6, "end": 218.6, "text": " With this technique, the possibilities really feel endless."}, {"start": 218.6, "end": 230.6, "text": " We can animate humanoids, monsters, other adorable creatures, or can even make scientific illustrations come to life without any modeling and rigging expertise."}, {"start": 230.6, "end": 240.6, "text": " Do you remember this earlier video where we could paint on a piece of 3D geometry and transfer its properties onto a 3D model?"}, {"start": 240.6, "end": 244.6, "text": " This method can be combined with that too."}, {"start": 246.6, "end": 248.6, "text": " Yum!"}, {"start": 248.6, "end": 256.6, "text": " And in case you're wondering how quick this combination is, my goodness, very, very quick."}, {"start": 256.6, "end": 258.6, "text": " Now, this technique is also not perfect."}, {"start": 258.6, "end": 266.6, "text": " One of the limitations of this single view drawing workflow is that we only have limited control over the proportions in depth."}, {"start": 266.6, "end": 270.6, "text": " The drawing occludes regions is also not that easy."}, {"start": 270.6, "end": 276.6, "text": " The authors proposed possible solutions to these limitations in the paper, so make sure to have a look in the video description."}, {"start": 276.6, "end": 284.6, "text": " And it appears to me that with a little polishing, this may be ready to go for artistic projects right now."}, {"start": 284.6, "end": 290.6, "text": " If you have a closer look, you will also see that this work also cites the flow paper from Mihai Chicks and Mihai."}, {"start": 290.6, "end": 292.6, "text": " Extra start points for that."}, {"start": 292.6, "end": 296.6, "text": " And with that said, when can we use this?"}, {"start": 296.6, "end": 298.6, "text": " And here comes the best part right now."}, {"start": 298.6, "end": 306.6, "text": 
" The authors really put their papers where their mouth is, or in other words, the source code for this project is available."}, {"start": 306.6, "end": 310.6, "text": " Also, there is an online demo."}, {"start": 310.6, "end": 310.6, "text": " Woohoo!"}, {"start": 310.6, "end": 316.6, "text": " The link is available in the video description. Make sure to read the instructions before you start."}, {"start": 316.6, "end": 324.6, "text": " So, there you go. Instant 3D models with animation without requiring 3D modeling and rigging expertise."}, {"start": 324.6, "end": 328.6, "text": " What do you think? Let me know in the comments below."}, {"start": 328.6, "end": 338.6, "text": " This episode has been supported by weights and biases. In this post, they show you how to use transformers from the Hanging Face library and how to track your model performance."}, {"start": 338.6, "end": 344.6, "text": " During my PhD studies, I trained a ton of neural networks which were used in our experiments."}, {"start": 344.6, "end": 352.6, "text": " However, over time, there was just too much data in our repositories and what I am looking for is not data, but insight."}, {"start": 352.6, "end": 357.6, "text": " And that's exactly how weights and biases helps you by organizing your experiments."}, {"start": 357.6, "end": 365.6, "text": " It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub, and more."}, {"start": 365.6, "end": 372.6, "text": " And get this, weights and biases is free for all individuals, academics, and open source projects."}, {"start": 372.6, "end": 381.6, "text": " Make sure to visit them through wnba.com slash papers or just click the link in the video description and you can get a free demo today."}, {"start": 381.6, "end": 387.6, "text": " Our thanks to weights and biases for their long-standing support and for helping us make better videos for you."}, {"start": 387.6, "end": 411.6, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=2wcw_O_19XQ
This is What Abraham Lincoln May Have Looked Like! 🎩
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/wandb/instacolorization/reports/Overview-Instance-Aware-Image-Colorization---VmlldzoyOTk3MDI 📝 The paper "Time-Travel Rephotography" is available here: https://time-travel-rephotography.github.io/ 📝 Our "Separable Subsurface Scattering" paper with Activision Blizzard is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/separable-subsurface-scattering-with-activision-blizzard/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today we are going to travel in time. With the ascendancy of neural network-based learning techniques, this previous method enables us to take an old, old black-and-white movie that suffers from a lot of problems like missing data, flickering and more, give it to a neural network, and have it restored for us. And here you can not only see how much better this restored version is, but it took it one step further. It also performed colorization. Essentially, here we could produce six colorized reference images, and the neural network uses them as art direction and propagates all this information to the remainder of the frames. So this work did restoration and colorization at the same time. This was absolutely amazing, and now comes something even better. Today we have a new piece of work that performs not only restoration and colorization, but super resolution as well. What this means is that we can take an antique photo which suffers from a lot of issues. Look, these old films exaggerate wrinkles a great deal; they even darken the lips and do funny things with red colors. For instance, subsurface scattering is also missing. This is light penetrating our skin and bouncing inside before coming out again. And the lack of this effect is why the skin looks a little plasticky here. Luckily, we can simulate all these phenomena on our computers. I am a light transport researcher by trade, and this is from our earlier paper with the Activision Blizzard game development company. This is the same phenomenon, a simulation without subsurface scattering, and this one is with simulating this effect. Beautiful. You can find a link to this paper in the video description. So with all these problems with the antique photos, our question is, what did Lincoln really look like? Well, let's try an earlier framework for restoration, colorization, and super resolution and... Well, unfortunately most of our issues still remain. Lots of exaggerated wrinkles, plasticky look, lots of detail missing. Can we do better? Well, hold on to your papers and observe the output with the new technique. Wow! The restoration indeed took place properly and brought the wrinkles down to a much more realistic level. Skin looks like skin because of subsurface scattering, and the super resolution part is responsible for a lot of new detail everywhere, but especially around the lips. Outstanding. It truly feels like this photo has been rephotographed with a modern camera, and with that, please meet Time-Travel Rephotography. And the curious thing is that all this sounds flat-out impossible. Why is that? Since we don't have old and new image pairs of Lincoln and many other historic figures, the question naturally arises in the mind of the curious Fellow Scholar: how do we train a neural network to perform this? And the answer is that we need to use their siblings. Now, this doesn't mean that Lincoln had a long-lost sibling that we don't know about. What this means is that as the photo is fed through our neural network, we can generate a photorealistic image of someone, and this someone kind of resembles the target subject and has all the details filled in. Then, in the next step, we can start morphing the sibling until it starts resembling the test subject.
With this previously existing StyleGAN2 technique, morphing is now easy to do, but restoration is hard, so essentially, with this, we can skip the difficult restoration part and just do the easier morphing instead, trading a difficult problem for an easier one. Absolutely brilliant idea. And if you have been holding onto your paper so far, now squeeze that paper, because it can do even more. Age progression. Look, if we only have a few target photos of Thomas Edison throughout his life, these will be our yardsticks, and the algorithm is able to generate his aging process between these yardstick images. And the best part is that these images have different lighting and pose, and none of this is an issue for the technique. It just doesn't care, and it still works beautifully. Wow. So we saw earlier that there are other methods that attempt to do this too, at least the colorization part. Yes, we have colorization and other techniques in abundance, so how does this compare to those? It appears to outpace all of them really convincingly. The numbers from the user study and the algorithmically generated scores also favored the new technique. This is a huge leap forward. Do you have some other applications in mind for this new technique? Let me know in the comments what you would do with this or how you would like to see it improved. Now, of course, not even this technique is perfect; blurry and noisy regions can still appear here and there. And note that StyleGAN2, the basis for this algorithm, came out just a little more than a year ago. And it is amazing that we are witnessing such incredible progress in so little time. My goodness. And just imagine what the next paper down the line will bring. What a time to be alive. What you see here is a report on a previous paper that we covered in this series, which was made by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more. And the best part is that Weights & Biases is free for all individuals, academics and open source projects. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
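The sibling-and-morphing step described above boils down to optimization in a generator's latent space: start from a latent code that produces a plausible face and nudge it until the generated image matches the degraded photo under some loss. The sketch below is a minimal stand-in, assuming a tiny placeholder generator instead of the real pretrained StyleGAN2 and a plain pixel loss instead of the paper's carefully designed objectives.

```python
import torch

torch.manual_seed(0)

# Placeholder "generator": a frozen random linear map from a 64-dim latent
# code to a 256-dim "image". The real system uses a pretrained StyleGAN2.
G = torch.nn.Linear(64, 256)
for p in G.parameters():
    p.requires_grad_(False)

# Stand-in for the degraded antique photo we want to match.
target = torch.randn(256)

# Start from a "sibling" latent code and morph it toward the target
# by minimizing a reconstruction loss.
w = torch.randn(64, requires_grad=True)
opt = torch.optim.Adam([w], lr=0.05)

for step in range(500):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(G(w), target)
    loss.backward()
    opt.step()

print(f"final reconstruction error: {loss.item():.4f}")
```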
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karajjona Ifehir."}, {"start": 4.8, "end": 7.8, "text": " Today we are going to travel in time."}, {"start": 7.8, "end": 11.1, "text": " With the ascendancy of Neural Network Based Learning Techniques,"}, {"start": 11.1, "end": 15.9, "text": " this previous method enables us to take an old, old black and white movie"}, {"start": 15.9, "end": 21.6, "text": " that suffers from a lot of problems like missing data, flickering and more,"}, {"start": 21.6, "end": 25.400000000000002, "text": " give it to a Neural Network and have it restored for us."}, {"start": 25.4, "end": 30.5, "text": " And here you can not only see how much better this restored version is,"}, {"start": 30.5, "end": 33.8, "text": " but it took it one step further."}, {"start": 33.8, "end": 36.8, "text": " It also performed colorization."}, {"start": 36.8, "end": 41.5, "text": " Essentially here we could produce six colorized reference images"}, {"start": 41.5, "end": 44.599999999999994, "text": " and the Neural Network uses them as our direction"}, {"start": 44.599999999999994, "end": 49.3, "text": " and propagates all this information to the remainder of the frames."}, {"start": 49.3, "end": 54.5, "text": " So this work did restoration and colorization at the same time."}, {"start": 54.5, "end": 59.1, "text": " This was absolutely amazing and now comes something even better."}, {"start": 59.1, "end": 64.7, "text": " Today we have a new piece of work that performs not only restoration and colorization,"}, {"start": 64.7, "end": 67.7, "text": " but super resolution as well."}, {"start": 67.7, "end": 70.9, "text": " What this means is that we can take an antique photo"}, {"start": 70.9, "end": 73.4, "text": " which suffers from a lot of issues."}, {"start": 73.4, "end": 77.9, "text": " Look, these old films exaggerate wrinkles, a great deal,"}, {"start": 77.9, "end": 82.8, "text": " they even darken the lips and do funny things with red colors."}, {"start": 82.8, "end": 86.2, "text": " For instance, subsurface scattering is also missing."}, {"start": 86.2, "end": 91.89999999999999, "text": " This is light penetrating our skin and bouncing inside before coming out again."}, {"start": 91.89999999999999, "end": 96.6, "text": " And the lack of this effect is why the skin looks a little plasticky here."}, {"start": 96.6, "end": 100.7, "text": " Luckily we can simulate all these phenomena on our computers."}, {"start": 100.7, "end": 103.2, "text": " I am a light transport researcher by trade"}, {"start": 103.2, "end": 108.4, "text": " and this is from our earlier paper with the Activision Blizzard game development company."}, {"start": 108.4, "end": 113.30000000000001, "text": " This is the same phenomenon, a simulation without subsurface scattering"}, {"start": 113.30000000000001, "end": 116.80000000000001, "text": " and this one is with simulating this effect."}, {"start": 116.80000000000001, "end": 118.2, "text": " Beautiful."}, {"start": 118.2, "end": 121.7, "text": " You can find a link to this paper in the video description."}, {"start": 121.7, "end": 124.9, "text": " So with all these problems with the antique photos,"}, {"start": 124.9, "end": 128.9, "text": " our question is what did Lincoln really look like?"}, {"start": 128.9, "end": 133.5, "text": " Well, let's try an earlier framework for restoration, colorization,"}, {"start": 133.5, "end": 136.3, "text": " and super resolution and..."}, {"start": 136.3, "end": 140.4, "text": 
" Well, unfortunately most of our issues still remain."}, {"start": 140.4, "end": 146.4, "text": " Lots of exaggerated wrinkles, plasticky look, lots of detail missing."}, {"start": 146.4, "end": 148.8, "text": " Can we do better?"}, {"start": 148.8, "end": 154.8, "text": " Well, hold on to your papers and observe the output with the new technique."}, {"start": 154.8, "end": 156.3, "text": " Wow!"}, {"start": 156.3, "end": 159.3, "text": " The restoration indeed took place properly,"}, {"start": 159.3, "end": 162.8, "text": " brought the wrinkles down to a much more realistic level."}, {"start": 162.8, "end": 166.60000000000002, "text": " Skin looks like skin because of subsurface scattering"}, {"start": 166.60000000000002, "end": 171.60000000000002, "text": " and the super resolution part is responsible for a lot of new detail everywhere,"}, {"start": 171.60000000000002, "end": 174.60000000000002, "text": " but especially around the lips."}, {"start": 174.60000000000002, "end": 175.8, "text": " Outstanding."}, {"start": 175.8, "end": 180.8, "text": " It truly feels like this photo has been refotographed with a modern camera"}, {"start": 180.8, "end": 184.8, "text": " and with that please meet time travel, refotography."}, {"start": 184.8, "end": 189.8, "text": " And the curious thing is that all these sounds flat out impossible."}, {"start": 189.8, "end": 191.4, "text": " Why is that?"}, {"start": 191.4, "end": 194.70000000000002, "text": " Since we don't have old and new image pairs of Lincoln"}, {"start": 194.70000000000002, "end": 196.8, "text": " and many other historic figures,"}, {"start": 196.8, "end": 201.8, "text": " the question naturally arises in the mind of the curious fellow-scaler,"}, {"start": 201.8, "end": 205.4, "text": " how do we train a neural network to perform this?"}, {"start": 205.4, "end": 209.4, "text": " And the answer is that we need to use their siblings."}, {"start": 209.4, "end": 213.9, "text": " Now this doesn't mean that Lincoln had a long-lost sibling that we don't know about,"}, {"start": 213.9, "end": 219.0, "text": " what this means is that as the output image is fed through our neural network,"}, {"start": 219.0, "end": 223.0, "text": " we can generate a photorealistic image of someone"}, {"start": 223.0, "end": 227.0, "text": " and this someone kind of resembles the target subject"}, {"start": 227.0, "end": 230.2, "text": " and has all the details filled in."}, {"start": 230.2, "end": 234.0, "text": " Then in the next step, we can start morphing the sibling"}, {"start": 234.0, "end": 237.5, "text": " until it starts resembling the test subject."}, {"start": 237.5, "end": 240.4, "text": " With this previously existing Stagen II technique,"}, {"start": 240.4, "end": 243.0, "text": " morphing is now easy to do,"}, {"start": 243.0, "end": 245.0, "text": " but restoration is hard,"}, {"start": 245.0, "end": 246.6, "text": " so essentially with this,"}, {"start": 246.6, "end": 249.4, "text": " we can skip the difficult restoration part"}, {"start": 249.4, "end": 252.6, "text": " and just do the easier morphing instead,"}, {"start": 252.6, "end": 256.2, "text": " trading a difficult problem for an easier one."}, {"start": 256.2, "end": 258.5, "text": " Absolutely brilliant idea."}, {"start": 258.5, "end": 261.7, "text": " And if you have been holding onto your paper so far,"}, {"start": 261.7, "end": 266.7, "text": " now squeeze that paper because it can do even more."}, {"start": 266.7, "end": 268.3, "text": " Age progression."}, {"start": 268.3, "end": 
274.0, "text": " Look, if we only have a few target photos of Thomas Edison throughout his life,"}, {"start": 274.0, "end": 277.9, "text": " these will be our yardsticks and the algorithm is able to generate"}, {"start": 277.9, "end": 281.9, "text": " his aging process between these yardstick images."}, {"start": 281.9, "end": 285.4, "text": " And the best part is that these images have different lighting,"}, {"start": 285.4, "end": 289.3, "text": " pose, and none of this is an issue for the technique."}, {"start": 289.3, "end": 293.6, "text": " It just doesn't care and it still works beautifully."}, {"start": 293.6, "end": 294.9, "text": " Wow."}, {"start": 294.9, "end": 297.8, "text": " So we saw earlier that there are other methods"}, {"start": 297.8, "end": 299.8, "text": " that attempt to do this too,"}, {"start": 299.8, "end": 302.2, "text": " at least the colorization part."}, {"start": 302.2, "end": 305.9, "text": " Yes, we have colorization and other techniques in abundance,"}, {"start": 305.9, "end": 308.59999999999997, "text": " so how does this compare to those?"}, {"start": 308.59999999999997, "end": 312.59999999999997, "text": " It appears to outpace all of them really convincingly."}, {"start": 312.59999999999997, "end": 316.8, "text": " The numbers from the user study and the algorithmically generated scores"}, {"start": 316.8, "end": 318.8, "text": " also favored the new technique."}, {"start": 318.8, "end": 321.2, "text": " This is a huge leap forward."}, {"start": 321.2, "end": 324.9, "text": " Do you have some other applications in mind for this new technique?"}, {"start": 324.9, "end": 327.59999999999997, "text": " Let me know in the comments what you would do with this"}, {"start": 327.59999999999997, "end": 331.59999999999997, "text": " or how you would like to see it improved."}, {"start": 331.6, "end": 334.8, "text": " Now, of course, not even this technique is perfect,"}, {"start": 334.8, "end": 338.8, "text": " blurry and noisy regions can still appear here and there."}, {"start": 338.8, "end": 342.6, "text": " And note that Stagian 2, the basis for this algorithm,"}, {"start": 342.6, "end": 345.70000000000005, "text": " came out just a little more than a year ago."}, {"start": 345.70000000000005, "end": 350.0, "text": " And it is amazing that we are witnessing such incredible progress"}, {"start": 350.0, "end": 351.70000000000005, "text": " in so little time."}, {"start": 351.70000000000005, "end": 353.20000000000005, "text": " My goodness."}, {"start": 353.20000000000005, "end": 357.40000000000003, "text": " And just imagine what the next paper down the line will bring."}, {"start": 357.40000000000003, "end": 359.40000000000003, "text": " What a time to be alive."}, {"start": 359.4, "end": 362.2, "text": " What you see here is a report for a previous paper"}, {"start": 362.2, "end": 366.2, "text": " that we covered in this series which was made by Wades and Biosys."}, {"start": 366.2, "end": 369.4, "text": " Wades and Biosys provides tools to track your experiments"}, {"start": 369.4, "end": 370.9, "text": " in your deep learning projects."}, {"start": 370.9, "end": 374.5, "text": " Their system is designed to save you a ton of time and money"}, {"start": 374.5, "end": 377.79999999999995, "text": " and it is actively used in projects at prestigious labs"}, {"start": 377.79999999999995, "end": 381.79999999999995, "text": " such as OpenAI, Toyota Research, GitHub and more."}, {"start": 381.79999999999995, "end": 384.79999999999995, "text": " And the best part is 
that Wades and Biosys is free"}, {"start": 384.79999999999995, "end": 388.5, "text": " for all individuals, academics and open source projects."}, {"start": 388.5, "end": 391.1, "text": " It really is as good as it gets."}, {"start": 391.1, "end": 395.1, "text": " Make sure to visit them through wnb.com slash papers"}, {"start": 395.1, "end": 397.9, "text": " or just click the link in the video description"}, {"start": 397.9, "end": 400.0, "text": " and you can get a free demo today."}, {"start": 400.0, "end": 403.1, "text": " Our thanks to Wades and Biosys for their longstanding support"}, {"start": 403.1, "end": 405.9, "text": " and for helping us make better videos for you."}, {"start": 405.9, "end": 408.1, "text": " Thanks for watching and for your generous support"}, {"start": 408.1, "end": 419.1, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=ZZ-kORb8grA
This AI Learn To Climb Crazy Terrains! 🤖
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper ALLSTEPS: Curriculum-driven Learning of Stepping Stone skills"" is available here: - https://www.cs.ubc.ca/~van/papers/2020-allsteps/index.html - https://github.com/belinghy/SteppingStone Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord. If you drop by, make sure to write a short introduction if you feel like it! https://discordapp.com/invite/hbcTJu2 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to appear here or pick up other perks, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-1696507/ Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. In 2017, scientists at OpenAI published a paper where virtual humans learned to tackle each other in a sumo competition of sorts, and found out how to rock a stable stance to block others from tackling them. This was a super interesting work because it involved self-play, or in other words, copies of the same AI were playing against each other, and the question was how to pair them with each other to maximize their learning. They found something really remarkable when they asked the algorithm to defeat an older version of itself. If it can reliably pull that off, it will lead to a rapid and predictable learning process. This kind of curriculum-driven learning can supercharge many different kinds of AIs. For instance, this robot from a later paper is essentially blind, as it only has proprioceptive sensors, which means that the only thing the robot senses is its own internal state, and that's it. No cameras, no depth sensors, no light, nothing. And at first, it behaves as we would expect. Look, when we start out, the agent is very clumsy and can barely walk through simple terrain. But as time passes, it grows to be a little more confident, and with that, the terrain also becomes more difficult over time in order to maximize learning. So how potent is this kind of curriculum in teaching the AI? Well, it learned a great deal in the simulation, and the scientists deployed it into the real world. Just look at how well it traversed this rocky mountain stream, and not even this nightmarish snowy descent gave it too much trouble. This new technique proposes a similar curriculum-based approach, where we teach all kinds of virtual life forms to navigate on stepping stones. The examples include a virtual human, a bipedal robot called Cassie, and this sphere with toothpick legs too. The authors call it Monster, so you know what? Monster it is. The fundamental question here is, how do we organize the stepping stones in this virtual environment to deliver the best teaching to this AI? We can freely choose the heights and orientations of the upcoming steps, and, of course, it is easier said than done.
Absolutely amazing. What a time to be alive. However, I know what you're thinking. Why teach them to navigate just stepping stones? This is such a narrow application of locomotion. So why this task? A great question. And the answer is that the generality of this technique, we just talked about also means that the stepping stone navigation truly was just a stepping stone. And here it is. We can deploy these agents to a continuous terrain and expect them to lean on their stepping stone chops to navigate well here too. Another great triumph for curriculum based AI training environments. So what do you think? What would you use this technique for? Let me know in the comments or if you wish to discuss similar topics with other fellow scholars in a warm and welcoming environment, make sure to join our Discord channel. The link is available in the video description. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000 and V100 instances. And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 6.16, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir in 2017"}, {"start": 6.16, "end": 12.4, "text": " scientists at OpenAI published a paper where virtual humans learned to tackle each other"}, {"start": 12.4, "end": 19.84, "text": " in a sumo competition of sorts and found out how to rock a stable stance to block others"}, {"start": 19.84, "end": 27.36, "text": " from tackling them. This was a super interesting work because it involved self-play or in other words"}, {"start": 27.36, "end": 33.36, "text": " copies of the same AI were playing against each other and the question was how do we pair them"}, {"start": 33.36, "end": 39.36, "text": " with each other to maximize their learning. They found something really remarkable when they"}, {"start": 39.36, "end": 45.760000000000005, "text": " asked the algorithm to defeat an older version of itself. If it can reliably pull that off,"}, {"start": 45.760000000000005, "end": 52.4, "text": " it will lead to a rapid and predictable learning process. This kind of curriculum-driven learning"}, {"start": 52.4, "end": 58.48, "text": " can supercharge many different kinds of AI's. For instance, this robot from a later paper is"}, {"start": 58.48, "end": 64.48, "text": " essentially blind as it only has proprioceptive sensors which means that the only thing that the"}, {"start": 64.48, "end": 71.52, "text": " robot senses is its own internal state and that's it. No cameras, no depth sensors, no light"}, {"start": 71.52, "end": 79.75999999999999, "text": " or nothing. And at first it behaves as we would expect it. Look, when we start out, the agent is"}, {"start": 79.76, "end": 87.04, "text": " very clumsy and can barely walk through a simple terrain. But as time passes, it grows to be a"}, {"start": 87.04, "end": 93.44, "text": " little more confident and with that the terrain also becomes more difficult over time in order to"}, {"start": 93.44, "end": 101.28, "text": " maximize learning. So how potent is this kind of curriculum in teaching the AI? Well, it learned"}, {"start": 101.28, "end": 107.60000000000001, "text": " a great deal in the simulation and a scientist deployed it into the real world just look at how"}, {"start": 107.6, "end": 115.19999999999999, "text": " well it traversed through this rocky mountain, stream and not even this nightmare-ish snowy descent"}, {"start": 115.19999999999999, "end": 122.24, "text": " gave it too much trouble. This new technique proposes a similar curriculum-based approach where we"}, {"start": 122.24, "end": 128.56, "text": " would teach all kinds of virtual life forms to navigate on stepping stones. The examples include"}, {"start": 128.56, "end": 136.32, "text": " a virtual human, a bipedal robot called Cassie and this sphere with toothpick legs too."}, {"start": 136.32, "end": 144.72, "text": " The authors call it monster, so you know what? Monster it is. The fundamental question here is"}, {"start": 144.72, "end": 150.16, "text": " how do we organize the stepping stones in this virtual environment to deliver the best teaching"}, {"start": 150.16, "end": 156.95999999999998, "text": " to this AI? We can freely choose the heights and orientations of the upcoming steps and, of course,"}, {"start": 156.95999999999998, "end": 164.16, "text": " it is easier said than done. 
If the curriculum is too easy, no meaningful learning will take place"}, {"start": 164.16, "end": 173.28, "text": " and if it gets too difficult too quickly, well then in the better case this happens. And in the"}, {"start": 173.28, "end": 183.04, "text": " worst case, whoops! This work proposes an adaptive curriculum that constantly measures how these"}, {"start": 183.04, "end": 189.6, "text": " agents perform and creates challenges that progressively get harder, but in a way that they can be"}, {"start": 189.6, "end": 196.24, "text": " solved by the agents. It can even deal with cases where the AI already knows how to climb up and"}, {"start": 196.24, "end": 203.12, "text": " down and even deal with longer steps. But that does not mean that we are done because if we don't"}, {"start": 203.12, "end": 212.4, "text": " build the Spires right, this happens. But after learning 12 to 24 hours with this adaptive curriculum"}, {"start": 212.4, "end": 220.24, "text": " learning method, they become able to even run, deal with huge step height variations, high-step tilt"}, {"start": 220.24, "end": 228.16, "text": " variations, and let's see if they can pass the hardest exam. Look at this mess, my goodness,"}, {"start": 228.16, "end": 236.56, "text": " lots of variation in every perimeter. And yes, it works. And the key point is that the system is"}, {"start": 236.56, "end": 242.56, "text": " general enough that it can teach different body types to do the same. If there is one thing that"}, {"start": 242.56, "end": 248.8, "text": " you take home from this video, it shouldn't be that it takes from 12 to 24 hours, it should be"}, {"start": 248.8, "end": 255.12, "text": " that the system is general. Normally, if we have a new body type, we need to write a new control"}, {"start": 255.12, "end": 261.76, "text": " algorithm, but in this case, whatever the body type is, we can use the same algorithm to teach it."}, {"start": 261.76, "end": 268.32, "text": " Absolutely amazing. What a time to be alive. However, I know what you're thinking."}, {"start": 268.32, "end": 275.36, "text": " Why teach them to navigate just stepping stones? This is such a narrow application of locomotion."}, {"start": 275.36, "end": 282.08, "text": " So why this task? A great question. And the answer is that the generality of this technique,"}, {"start": 282.08, "end": 288.08, "text": " we just talked about also means that the stepping stone navigation truly was just a stepping stone."}, {"start": 288.08, "end": 295.59999999999997, "text": " And here it is. We can deploy these agents to a continuous terrain and expect them to lean"}, {"start": 295.59999999999997, "end": 301.36, "text": " on their stepping stone chops to navigate well here too. Another great triumph for curriculum"}, {"start": 301.36, "end": 308.32, "text": " based AI training environments. So what do you think? What would you use this technique for? Let me"}, {"start": 308.32, "end": 314.24, "text": " know in the comments or if you wish to discuss similar topics with other fellow scholars in a"}, {"start": 314.24, "end": 319.84000000000003, "text": " warm and welcoming environment, make sure to join our Discord channel. The link is available"}, {"start": 319.84000000000003, "end": 325.36, "text": " in the video description. This episode has been supported by Lambda GPU Cloud. If you're looking"}, {"start": 325.36, "end": 334.0, "text": " for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. 
They've recently launched Quadro RTX 6000,"}, {"start": 334.0, "end": 342.40000000000003, "text": " RTX 8000 and V100 instances. And hold onto your papers because Lambda GPU Cloud can cost less than"}, {"start": 342.4, "end": 351.76, "text": " half of AWS and Azure. Plus they are the only Cloud service with 48GB RTX 8000. Join researchers at"}, {"start": 351.76, "end": 358.79999999999995, "text": " organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers."}, {"start": 358.79999999999995, "end": 364.96, "text": " Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances"}, {"start": 364.96, "end": 370.71999999999997, "text": " today. Our thanks to Lambda for their long standing support and for helping us make better videos for"}, {"start": 370.72, "end": 380.72, "text": " you. Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=o7dqGcLDf0A
These Neural Networks Have Superpowers! 💪
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/ayush-thakur/taming-transformer/reports/-Overview-Taming-Transformers-for-High-Resolution-Image-Synthesis---Vmlldzo0NjEyMTY 📝 The paper "Taming Transformers for High-Resolution Image Synthesis" is available here: https://compvis.github.io/taming-transformers/ Tweet links: Website layout: https://twitter.com/sharifshameem/status/1283322990625607681 Plots: https://twitter.com/aquariusacquah/status/1285415144017797126?s=12 Typesetting math: https://twitter.com/sh_reya/status/1284746918959239168 Population data: https://twitter.com/pavtalk/status/1285410751092416513 Legalese: https://twitter.com/f_j_j_/status/1283848393832333313 Nutrition labels: https://twitter.com/lawderpaul/status/1284972517749338112 User interface design: https://twitter.com/jsngr/status/1284511080715362304 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. I got so excited by the amazing results of this paper. I will try my best to explain why, and by the end of this video, there will be a comparison that blew me away, and I hope you will appreciate it too. With the rise of neural network-based learning algorithms, we are living through the advent of image generation techniques. What you see here is a set of breathtaking results created with a technique called StyleGAN2. This can generate images of humans, cars, cats, and more. As you see, the progress in machine learning-based image generation is just stunning. And don't worry for a second about the progress in text processing, because that is also similarly amazing these days. A few months ago, OpenAI published their GPT-3 model that they unleashed to read the internet, and it learned not just our language, but much, much more. For instance, the internet also contains a lot of computer code, so it learned to generate website layouts from a written description. But that's not all, not even close. To the joy of technical PhD students around the world, it can properly typeset mathematical equations from a plain English description as well. And get this, it can also translate a complex legal text into plain language, or the other way around. And it does many of these things nearly as well as humans. So, what was the key to this work? One of the keys of GPT-3 was that it uses a neural network architecture that is called the Transformer network. This really took the world by storm in the last few years, so our first question is, why Transformers? One, Transformer networks can typically learn on stupendously large datasets, like the whole internet, and extract a lot of information from them. That is a very good thing. And two, Transformers are attention-based neural networks, which means that they are good at learning and generating long sequences of data. Okay, but how do we benefit from this? Well, when we ask OpenAI's GPT-3 to continue our sentences, it is able to look back at what we have written previously. And it looks at not just a couple of characters. No, no, it looks at up to several pages of writing backwards to make sure that it continues what we write the best way it can. That sounds amazing. But what is the lesson here? Just use Transformers for everything, and off we go? Well, not quite. They are indeed good at a lot of things when it comes to text processing tasks, but they don't excel at generating high-resolution images at all. Can this be improved somehow? Well, this is what this new technique does, and much, much more. So, let's dive in and see what it can do. First, we can give it an incomplete image and ask it to finish it. Not bad, but OpenAI's Image GPT could do that too, so what else can it do? Oh boy, a lot more. And by the way, we will compare the results of this technique against Image GPT at the end of this video. Make sure not to miss that. I almost fell off the chair, you will see in a moment why. Two, it can do one of my favorites, depth-to-image generation. We give it a depth map, which is very easy to produce, and it creates a photorealistic image that corresponds to it, which is very hard. We do the easy part, the AI does the hard part. Great. And with this, we not only get a selection of these images, but since we have their depth maps, we can also rotate them around as if they were 3D objects. Nice. Three, we can also give it a map of labels, which is, again, very easy to do.
We just say, here goes the sea, put some mountains there, and the sky here, and it will create a beautiful landscape image that corresponds to that. I can't wait to see what amazing artists all over the world will be able to get out of these techniques, and these results are already breathtaking, but research is a process, and just imagine how good they will become two more papers down the line. My goodness. Four, it can also perform super resolution. This is the CSI thing where in goes a blurry image and out comes a finer, more detailed version of it. Witchcraft! And finally, five, we can give it a pose and it generates humans that take these poses. Now, the important thing here is that it can supercharge transformer networks to do all of these things at the same time with just one technique. So, how does it compare to OpenAI's image completion technique? Well, remember, that technique was beyond amazing and set a really high bar. So, let's have a look together. They were both given the upper half of this image and had to fill in the lower half. Remember, as we just learned, transformers are not great at high-resolution image synthesis. So, here, for OpenAI's Image GPT, we expect heavily pixelated images, and, oh yes, that's right. So, now, hold on to your papers and let's see how much more detailed the new technique is. Holy mother of papers! Do you see what I see here? Image GPT came out just a few months ago, and there is already this kind of progress. So, there we go. Just imagine what we will be able to do with these supercharged transformers just two more papers down the line. Wow! And that's where I almost fell off the chair when reading this paper. Hope you held on to yours. It truly feels like we are living in a science fiction world. What a time to be alive. What you see here is a report on this exact paper we have talked about, which was made by Weights & Biases. I put a link to it in the description. Make sure to have a look. I think it helps you understand this paper better. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights & Biases is free for all individuals, academics, and open source projects. It really is as good as it gets. Make sure to visit them through wandb.com slash papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
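To make the point above about attention-based networks handling long sequences more concrete, here is a minimal NumPy sketch of scaled dot-product self-attention, the core operation inside transformer networks. The shapes, random weights, and absence of multiple heads are simplifications for illustration; this is not the architecture of the paper discussed in the transcript.

```python
# Minimal scaled dot-product self-attention in NumPy: every position in the
# sequence "looks back" at every other position and mixes their values.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    """x: (sequence_length, d_model) token embeddings."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v          # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])      # similarity of every token pair
    weights = softmax(scores)                    # each row sums to 1
    return weights @ v                           # weighted mix of value vectors

rng = np.random.default_rng(0)
seq_len, d_model = 6, 8
x = rng.normal(size=(seq_len, d_model))
w_q, w_k, w_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)   # (6, 8): one context-aware vector per input token
```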
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Carter-Jorna Ifehid."}, {"start": 4.64, "end": 8.48, "text": " I got so excited by the amazing results of this paper."}, {"start": 8.48, "end": 12.8, "text": " I will try my best to explain why, and by the end of this video,"}, {"start": 12.8, "end": 17.36, "text": " there will be a comparison that blew me away, and I hope you will appreciate it too."}, {"start": 17.92, "end": 21.04, "text": " With the rise of neural network-based learning algorithms,"}, {"start": 21.04, "end": 24.0, "text": " we are living the advent of image generation techniques."}, {"start": 24.48, "end": 29.76, "text": " What you see here is a set of breathtaking results created with a technique called"}, {"start": 29.76, "end": 39.92, "text": " StyleGand2. This can generate images of humans, cars, cats, and more."}, {"start": 39.92, "end": 44.88, "text": " As you see, the progress in machine learning-based image generation is just stunning."}, {"start": 45.52, "end": 49.44, "text": " And don't worry for a second about the progress in text processing,"}, {"start": 49.44, "end": 52.480000000000004, "text": " because that is also similarly amazing these days."}, {"start": 53.2, "end": 59.68000000000001, "text": " A few months ago, OpenAI published their GPT-3 model that they unleashed to read the internet"}, {"start": 59.68, "end": 63.6, "text": " and learned not just our language, but much, much more."}, {"start": 64.08, "end": 67.92, "text": " For instance, the internet also contains a lot of computer code,"}, {"start": 67.92, "end": 71.76, "text": " so it learned to generate website layouts from a written description."}, {"start": 72.96000000000001, "end": 78.88, "text": " But that's not all, not even close, to the joy of technical PhD students around the world."}, {"start": 78.88, "end": 84.16, "text": " It can properly type-set mathematical equations from a plain English description as well."}, {"start": 84.16, "end": 90.08, "text": " And get this, it can also translate a complex legal text into plain language,"}, {"start": 90.96, "end": 92.72, "text": " or the other way around."}, {"start": 93.44, "end": 97.52, "text": " And it does many of these things nearly as well as humans."}, {"start": 98.64, "end": 105.28, "text": " So, what was the key to this work? One of the keys of GPT-3 was that it uses a neural network"}, {"start": 105.28, "end": 110.88, "text": " architecture that is called the Transformer Network. This really took the world by storm in"}, {"start": 110.88, "end": 115.11999999999999, "text": " the last few years, so our first question is, why Transformers?"}, {"start": 116.08, "end": 120.88, "text": " One, Transformer Networks can typically learn on stupendously large data sets,"}, {"start": 120.88, "end": 124.72, "text": " like the whole internet, and extract a lot of information from it."}, {"start": 125.36, "end": 131.51999999999998, "text": " That is a very good thing. And two, Transformers are attention-based neural networks,"}, {"start": 131.51999999999998, "end": 136.8, "text": " which means that they are good at learning and generating long sequences of data."}, {"start": 136.8, "end": 140.32000000000002, "text": " Ok, but how do we benefit from this?"}, {"start": 140.32000000000002, "end": 148.4, "text": " Well, when we ask OpenEAS GPT-3 to continue our sentences, it is able to look back at what we"}, {"start": 148.4, "end": 156.08, "text": " have written previously. 
And it looks at not just a couple of characters. No, no, it looks at up to"}, {"start": 156.08, "end": 162.0, "text": " several pages of writing backwards to make sure that it continues what we write the best way it can."}, {"start": 162.0, "end": 169.52, "text": " That sounds amazing. But what is the lesson here? Just use Transformers for everything,"}, {"start": 169.52, "end": 176.24, "text": " and off we go? Well, not quite. They are indeed good at a lot of things when it comes to text"}, {"start": 176.24, "end": 181.76, "text": " processing tasks, but they don't excel at generating high resolution images at all."}, {"start": 182.4, "end": 189.76, "text": " Can this be improved somehow? Well, this is what this new technique does, and much, much more."}, {"start": 189.76, "end": 197.28, "text": " So, let's dive in and see what it can do. First, we can give it an incomplete image and ask it to"}, {"start": 197.28, "end": 205.35999999999999, "text": " finish it. Not bad, but OpenEAS image GPT could do that too, so what else can it do?"}, {"start": 206.16, "end": 211.76, "text": " Oh boy, a lot more. And by the way, we will compare the results of this technique"}, {"start": 211.76, "end": 217.92, "text": " against image GPT at the end of this video. Make sure not to miss that. I almost fell off the chair,"}, {"start": 217.92, "end": 224.23999999999998, "text": " you will see in a moment why. Two, it can do one of my favorites, depth to image generation."}, {"start": 224.88, "end": 230.88, "text": " We give it a depth map, which is very easy to produce, and it creates a photorealistic image"}, {"start": 230.88, "end": 237.51999999999998, "text": " that corresponds to it, which is very hard. We do the easy part, the AI does the hard part."}, {"start": 238.16, "end": 243.92, "text": " Great. And with this, we not only get a selection of these images, but since we have their"}, {"start": 243.92, "end": 248.95999999999998, "text": " depth maps, we can also rotate them around as if they were 3D objects."}, {"start": 251.76, "end": 260.08, "text": " Nice. Three, we can also give it a map of labels, which is, again, very easy to do. We just say,"}, {"start": 260.08, "end": 266.32, "text": " here goes the sea, put some mountains there, and the sky here, and it will create a beautiful"}, {"start": 266.32, "end": 272.96, "text": " landscape image that corresponds to that. I can't wait to see what amazing artists all over the"}, {"start": 272.96, "end": 279.28, "text": " world will be able to get out of these techniques, and these results are already breathtaking,"}, {"start": 279.28, "end": 285.84, "text": " but research is a process and just imagine how good they will become two more papers down the line."}, {"start": 286.64, "end": 292.96, "text": " My goodness. Four, it can also perform super resolution. This is the CSI thing where"}, {"start": 292.96, "end": 297.84, "text": " in goes a blurry image and out comes a finer, more detailed version of it."}, {"start": 297.84, "end": 307.11999999999995, "text": " Which craft? And finally, five, we can give it a pose and it generates humans that take these poses."}, {"start": 307.91999999999996, "end": 313.67999999999995, "text": " Now, the important thing here is that it can supercharge transformer networks to do these things"}, {"start": 313.67999999999995, "end": 321.03999999999996, "text": " at the same time with just one technique. So, how does it compare to OpenAI's image completion"}, {"start": 321.04, "end": 328.48, "text": " technique? 
Well, remember that technique was beyond amazing and set a really high bar. So,"}, {"start": 328.48, "end": 334.40000000000003, "text": " let's have a look together. They were both given the upper half of this image and had to fill in"}, {"start": 334.40000000000003, "end": 340.8, "text": " the lower half. Remember, as we just learned, transformers are not great at high resolution image"}, {"start": 340.8, "end": 348.72, "text": " synthesis. So, here for OpenAI's image GPT, we expect heavily pixelated images, and,"}, {"start": 348.72, "end": 356.72, "text": " oh yes, that's right. So, now, hold on to your papers and let's see how much more detailed the"}, {"start": 356.72, "end": 367.36, "text": " new technique is. Holy matter of papers. Do you see what I see here? Image GPT came out just a few"}, {"start": 367.36, "end": 374.08000000000004, "text": " months ago and there is already this kind of progress. So, there we go. Just imagine what we will"}, {"start": 374.08, "end": 380.08, "text": " be able to do with these supercharge transformers just two more papers down the line. Wow!"}, {"start": 380.88, "end": 386.0, "text": " And that's where I almost fell off the chair when reading this paper. Hope you held on to yours."}, {"start": 386.56, "end": 392.08, "text": " It truly feels like we are living in a science fiction world. What a time to be alive."}, {"start": 392.08, "end": 397.03999999999996, "text": " What you see here is a report of this exact paper we have talked about which was made by"}, {"start": 397.03999999999996, "end": 402.24, "text": " weights and biases. I put a link to it in the description. Make sure to have a look. I think"}, {"start": 402.24, "end": 408.08, "text": " it helps you understand this paper better. Weight and biases provides tools to track your experiments"}, {"start": 408.08, "end": 413.2, "text": " in your deep learning projects. Their system is designed to save you a ton of time and money,"}, {"start": 413.2, "end": 419.68, "text": " and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub,"}, {"start": 419.68, "end": 424.88, "text": " and more. And the best part is that weights and biases is free for all individuals,"}, {"start": 424.88, "end": 431.04, "text": " academics, and open source projects. It really is as good as it gets. Make sure to visit them"}, {"start": 431.04, "end": 437.52000000000004, "text": " through wnb.com slash papers or just click the link in the video description and you can get the"}, {"start": 437.52000000000004, "end": 442.72, "text": " free demo today. Our thanks to weights and biases for their longstanding support and for helping"}, {"start": 442.72, "end": 463.68, "text": " us make better videos for you. Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=IUg-t609byg
Mind Reading For Brain-To-Text Communication! 🧠
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "High-performance brain-to-text communication via imagined handwriting" is available here: https://www.biorxiv.org/content/10.1101/2020.07.01.183384v1.full 📝 Our earlier video on Neuralink: https://www.youtube.com/watch?v=JKe53bcyBQY ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #braintotext #mindwriting
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to read some minds. A few months ago, our first video appeared on brain-machine interfaces. It was about a paper from Neuralink, which promised a great deal. For instance, they proposed a brain-machine interface that could read this pig's thoughts as it was running on the treadmill. And what's more, they not only read its thoughts, but they could also predict what the pig's brain is about to do. So, this was about reading thoughts related to movement. And to be able to use these brain-machine interfaces to the fullest, they should be able to enable some sort of communication for people who have lost their ability to move or speak. So, what about writing or speaking? As impossible as it sounds, can we somehow restore that, or is that still science fiction? Many of you told me that you would like to hear more about this topic, so, due to popular request, here it is, a look beyond Neuralink's project to see what else is out there. This is a collaboration between Stanford University and a selection of other institutions, and it allows brain-to-text transcription, where all the test subject has to do is imagine writing the letters, and they magically appear on the screen. And now, start holding on to your papers and just look at how quickly it goes. Ninety characters per minute, with over 94% accuracy, which can be improved to over 99% accuracy with an additional auto-correct technique. That is absolutely amazing, a true miracle. Ninety characters per minute means that the test subject here, who has a paralyzed hand, can almost think about writing these letters continuously, and most of them are decoded and put on the screen in less than a second. Also, wait a second, 90 characters per minute, that is about 80% as fast as the average typing speed on a smartphone screen for an able-bodied person of this age group. Whoa! It is quite remarkable that even years after paralysis, the motor cortex is still strong enough to be read by a brain-computer interface well enough for such typing speed and accuracy. It truly feels like we are living in a science fiction world. Of course, not even this technique is perfect, it has its own limitations. For instance, we can't edit or delete text, we have no access to capital letters, and the method has a calibration step that takes a long time, although it doesn't get significantly worse if we shorten this calibration time a bit. So, how does this work? First, the participant starts thinking of writing one letter at a time. Here you see the recorded neural activity, this is subject to decoding. You can see the decoded signals here. Now, we can't just give this to a computer to distinguish between them as is, so we project these into a 2D latent space where it is easy to find which letter corresponds to which region. Look, they form relatively tight clusters, therefore it is now easy to decide which of the squiggles corresponds to which letter. The decoding part is done using a recurrent neural network, which is endowed with memory and can deal with sequences of data. So here, in goes the brain activity, and out comes the decision that says which character these activities correspond to. Of course, our alphabet was not designed to be decoded with neural networks. So here is an almost science fiction-like question. How do we reformulate the alphabet to tailor it to maximize the efficiency of a neural network decoding our thoughts?
Or simpler, what would the alphabet look like if neural networks were in charge? Well, this paper has an answer to that too, so let's have a look. The squiggles indeed look like they came from another planet, so what do we gain from this? Well, look at the distance matrix for the English alphabet. The diagonal is supposed to be very blue, but what is not supposed to be blue at all are the regions that surround it. Look, the blue color here means that in the English alphabet the letters M and N can be relatively easily confused, same with the letters O and C, and there are many more similarities. And now, look, here is the same distance matrix for the optimized alphabet. No dark blue outside the diagonal. Much easier to process and decode. If neural networks were in charge, this is what the alphabet would look like. Glorious. Also, the fact that we are talking about squiggles is not a trivial insight at all, traditional methods typically rely on movement in straight lines to select letters and buttons. The other key thought in this paper is that modern neural network-based methods can decode these kinds of squiggles reliably. That is absolutely amazing. And wait a second, note that there is only one participant in the user study. Why just one participant? Why not call a bunch of people to test this? It is because this method uses a microelectrode array, and this requires surgery to insert, and because of that, these studies are difficult to perform and are usually done at times when the participant has brain surgery anyway for other reasons. Having more people in the study is usually prohibitively expensive, if at all possible, for this kind of brain implant. And note that research is a process and these papers are stepping stones. And now we are able to help people write 90 characters every minute with a brain-machine interface, and I can only imagine how good these techniques will become two more papers down the line. And don't forget, there are research works on non-invasive devices too. So what do you think? Let me know in the comments below. What a time to be alive! This episode has been supported by Lambda GPU Cloud. If you are looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000 and V100 instances, and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances, workstations or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
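As a rough illustration of the distance-matrix idea described above, the sketch below builds a pairwise distance matrix between a few toy 2D "letter trajectories": small off-diagonal entries flag pairs of characters that are easy to confuse, which is exactly what an optimized alphabet tries to avoid. The three-point trajectories are invented toy data, not the study's neural or handwriting recordings.

```python
# Toy confusability matrix between character trajectories.
import numpy as np

templates = {                       # made-up (x, y) pen paths, 3 points each
    "m": np.array([[0, 0], [1, 2], [2, 0.0]], float),
    "n": np.array([[0, 0], [1, 2], [2, 0.2]], float),
    "o": np.array([[0, 1], [1, 2], [2.0, 1.0]], float),
    "c": np.array([[0, 1], [1, 2], [1.8, 1.1]], float),
}

letters = list(templates)
dist = np.zeros((len(letters), len(letters)))
for i, a in enumerate(letters):
    for j, b in enumerate(letters):
        # mean point-to-point Euclidean distance between the two trajectories
        dist[i, j] = np.linalg.norm(templates[a] - templates[b], axis=1).mean()

print(letters)
print(np.round(dist, 2))   # near-zero off-diagonal entries mark confusable pairs
```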
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.8, "end": 7.62, "text": " Today, we are going to read some minds."}, {"start": 7.62, "end": 12.4, "text": " A few months ago, our first video appeared on brain-machine interfaces."}, {"start": 12.4, "end": 16.88, "text": " It was about a paper from Neuralink, which promised a great deal."}, {"start": 16.88, "end": 22.46, "text": " For instance, they proposed a brain-machine interface that could read these peaks' thoughts"}, {"start": 22.46, "end": 24.8, "text": " as it was running on the treadmill."}, {"start": 24.8, "end": 29.76, "text": " And what's more, they not only read its thoughts, but they could also predict what the"}, {"start": 29.76, "end": 32.160000000000004, "text": " peak's brain is about to do."}, {"start": 32.160000000000004, "end": 36.36, "text": " So, this was about reading thoughts related to movement."}, {"start": 36.36, "end": 41.480000000000004, "text": " And to be able to use these brain-machine interfaces to the fullest, they should be able"}, {"start": 41.480000000000004, "end": 48.6, "text": " to enable some sort of communication for people who have lost their ability to move or speak."}, {"start": 48.6, "end": 52.72, "text": " So, what about writing or speaking?"}, {"start": 52.72, "end": 59.68, "text": " As impossible as it sounds, can we somehow restore that, or is that still science fiction?"}, {"start": 59.68, "end": 63.94, "text": " Many of you told me that you would like to hear more about this topic, so, due to"}, {"start": 63.94, "end": 70.16, "text": " popular requests, here it is, a look beyond Neuralink's project to see what else is out"}, {"start": 70.16, "end": 71.48, "text": " there."}, {"start": 71.48, "end": 77.32, "text": " This is a collaboration between Stanford University and a selection of other institutions, and"}, {"start": 77.32, "end": 83.63999999999999, "text": " it allows brain-to-text transcription, where all the test subject has to do is imagine"}, {"start": 83.63999999999999, "end": 88.67999999999999, "text": " writing the letters, and they magically appear on the screen."}, {"start": 88.67999999999999, "end": 94.56, "text": " And now, start holding on to your papers and just look at how quickly it goes."}, {"start": 94.56, "end": 100.8, "text": " Ninety characters per minute, with over 94% of accuracy, which can be improved to over"}, {"start": 100.8, "end": 105.96, "text": " 99% accuracy with an additional auto-correct technique."}, {"start": 105.96, "end": 110.03999999999999, "text": " That is absolutely amazing, a true miracle."}, {"start": 110.03999999999999, "end": 115.6, "text": " Ninety characters per minute means that the test subject here, who has a paralyzed hand,"}, {"start": 115.6, "end": 121.52, "text": " can almost think about writing these letters continuously, and most of them are decoded"}, {"start": 121.52, "end": 125.03999999999999, "text": " and put on the screen in less than a second."}, {"start": 125.03999999999999, "end": 133.32, "text": " Also, wait a second, 90 characters per minute, that is about 80% as fast as the average typing"}, {"start": 133.32, "end": 138.84, "text": " speed on a smartphone screen for an able-bodied person of this age group."}, {"start": 138.84, "end": 140.84, "text": " Whoa!"}, {"start": 140.84, "end": 147.16, "text": " It is quite remarkable that even years after paralysis, the motor cortex is still strong enough"}, {"start": 147.16, "end": 153.68, 
"text": " to be read by a brain computer interface well enough for such typing speed and accuracy."}, {"start": 153.68, "end": 157.84, "text": " It truly feels like we are living in a science fiction world."}, {"start": 157.84, "end": 162.32, "text": " Of course, not even this technique is perfect, it has its own limitations."}, {"start": 162.32, "end": 166.04, "text": " For instance, we can't edit or delete text."}, {"start": 166.04, "end": 171.48, "text": " Have no access to capital letters, and the method has a calibration step that takes a long"}, {"start": 171.48, "end": 176.92, "text": " time, although it doesn't get significantly worse if we shorten this calibration time"}, {"start": 176.92, "end": 178.23999999999998, "text": " a bit."}, {"start": 178.23999999999998, "end": 181.48, "text": " So, how does this work?"}, {"start": 181.48, "end": 187.0, "text": " First the participant starts thinking of writing one letter at a time."}, {"start": 187.0, "end": 192.12, "text": " Here you see the recorded neural activity, this is subject to decoding."}, {"start": 192.12, "end": 196.68, "text": " You can see the decoded signals here."}, {"start": 196.68, "end": 201.44, "text": " And we can just give this to a computer to distinguish between them as is, we project"}, {"start": 201.44, "end": 207.44, "text": " these into a 2D latent space where it is easy to find which letter corresponds to which"}, {"start": 207.44, "end": 208.44, "text": " region."}, {"start": 208.44, "end": 215.8, "text": " Look, they form relatively tight clusters, therefore it is now easy to decide which of the squiggles"}, {"start": 215.8, "end": 218.56, "text": " corresponds to which letter."}, {"start": 218.56, "end": 224.12, "text": " The decoding part is done using a recurrent neural network which is in the out with memory"}, {"start": 224.12, "end": 227.48, "text": " and can deal with sequences of data."}, {"start": 227.48, "end": 233.36, "text": " So here, in goes the brain activity and out comes the decision that says which character"}, {"start": 233.36, "end": 236.2, "text": " these activities correspond to."}, {"start": 236.2, "end": 241.28, "text": " Of course, our alphabet was not designed to be decoded with neural networks."}, {"start": 241.28, "end": 245.0, "text": " So here is an almost science fiction-like question."}, {"start": 245.0, "end": 250.36, "text": " How do we reformulate the alphabet to tailor it to maximize the efficiency of a neural"}, {"start": 250.36, "end": 253.16, "text": " network decoding our thoughts?"}, {"start": 253.16, "end": 258.76, "text": " Or simpler, what would the alphabet look like if neural networks were in charge?"}, {"start": 258.76, "end": 263.96, "text": " Well, this paper has an answer to that too, so let's have a look."}, {"start": 263.96, "end": 269.44, "text": " The squiggles indeed look like they came from another planet, so what do we gain from"}, {"start": 269.44, "end": 270.44, "text": " this?"}, {"start": 270.44, "end": 274.4, "text": " Well, look at the distance matrix for the English alphabet."}, {"start": 274.4, "end": 281.12, "text": " The diagonal is supposed to be very blue, but what is not supposed to be blue at all"}, {"start": 281.12, "end": 283.23999999999995, "text": " are the regions that surround it."}, {"start": 283.23999999999995, "end": 290.28, "text": " Look, the blue color here means that in the English alphabet the letters M and N can be"}, {"start": 290.28, "end": 297.84, "text": " relatively easily confused, same with the letters O and C and 
there are many more similarities."}, {"start": 297.84, "end": 304.35999999999996, "text": " And now, look, here is the same distance matrix for the optimized alphabet."}, {"start": 304.36, "end": 307.8, "text": " No dark blue inside outside the diagonal."}, {"start": 307.8, "end": 310.48, "text": " Much easier to process and decode."}, {"start": 310.48, "end": 315.08000000000004, "text": " If neural networks were in charge, this is what the alphabet would look like."}, {"start": 315.08000000000004, "end": 316.08000000000004, "text": " Glorious."}, {"start": 316.08000000000004, "end": 321.8, "text": " Also, the fact that we are talking about squiggles is not a trivial insight at all, traditional"}, {"start": 321.8, "end": 328.40000000000003, "text": " methods typically rely on movement in straight lines to select letters and buttons."}, {"start": 328.40000000000003, "end": 333.68, "text": " The other key thought in this paper is that modern neural network-based methods can decode"}, {"start": 333.68, "end": 336.6, "text": " these thoughts of squiggles reliably."}, {"start": 336.6, "end": 339.2, "text": " That is absolutely amazing."}, {"start": 339.2, "end": 345.88, "text": " And wait a second, note that there is only one participant in the user study."}, {"start": 345.88, "end": 347.8, "text": " Why just one participant?"}, {"start": 347.8, "end": 351.28000000000003, "text": " Why not calling a bunch of people to test this?"}, {"start": 351.28000000000003, "end": 358.04, "text": " It is because this method uses a microelectro-array and this requires surgery to insert and because"}, {"start": 358.04, "end": 363.44, "text": " of that, these studies are difficult to perform and are usually done at times when the"}, {"start": 363.44, "end": 367.56, "text": " participant has brain surgery anyways for other reasons."}, {"start": 367.56, "end": 372.84, "text": " Having more people in the study is usually prohibitively expensive if at all possible for"}, {"start": 372.84, "end": 375.04, "text": " this kind of brain implant."}, {"start": 375.04, "end": 379.8, "text": " And note that research is a process and these papers are stepping stones."}, {"start": 379.8, "end": 385.28, "text": " And now we are able to help people write 90 characters every minute with a brain machine"}, {"start": 385.28, "end": 390.84, "text": " interface and I can only imagine how good these techniques will become two more papers"}, {"start": 390.84, "end": 392.28, "text": " down the line."}, {"start": 392.28, "end": 396.67999999999995, "text": " And don't forget there are research works on non-invasive devices too."}, {"start": 396.67999999999995, "end": 398.35999999999996, "text": " So what do you think?"}, {"start": 398.35999999999996, "end": 400.23999999999995, "text": " Let me know in the comments below."}, {"start": 400.23999999999995, "end": 401.96, "text": " What a time to be alive!"}, {"start": 401.96, "end": 405.44, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 405.44, "end": 411.4, "text": " If you are looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 411.4, "end": 419.32, "text": " They recently launched Quadro RTX 6000, RTX 8000 and V100 instances and hold onto your"}, {"start": 419.32, "end": 425.71999999999997, "text": " papers because Lambda GPU Cloud can cost less than half of AWS and Asia."}, {"start": 425.71999999999997, "end": 431.28, "text": " Plus they are the only Cloud service with 48GB RTX 8000."}, {"start": 431.28, "end": 437.68, 
"text": " Join researchers at organizations like Apple, MIT and Caltech in using Lambda Cloud instances,"}, {"start": 437.68, "end": 439.48, "text": " workstations or servers."}, {"start": 439.48, "end": 445.48, "text": " Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances"}, {"start": 445.48, "end": 446.48, "text": " today."}, {"start": 446.48, "end": 451.16, "text": " Thanks to Lambda for their long-standing support and for helping us make better videos for"}, {"start": 451.16, "end": 452.16, "text": " you."}, {"start": 452.16, "end": 479.6, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=MbZ0ld1ShFo
Perfect Virtual Hands - But At A Cost! 👐
❤️ Check out Perceptilabs and sign up for a free demo here: https://www.perceptilabs.com/papers 📝 The paper "Constraining Dense Hand Surface Tracking with Elasticity" is available here: https://research.fb.com/publications/constraining-dense-hand-surface-tracking-with-elasticity/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #vr #metaverse
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. The promise of virtual reality, VR, is indeed truly incredible. If one day it comes to fruition, doctors could be trained to perform surgery in a virtual environment, we could train better pilots with better flight simulators, expose astronauts to virtual zero-gravity simulations, you name it. This previous work uses a learning-based algorithm to teach a head-mounted camera to tell the orientation of our hands at all times. Okay, so what can we do with this? A great deal. For instance, we can type on a virtual keyboard or implement all kinds of virtual user interfaces that we can interact with. We can also organize imaginary boxes, and of course we can't leave out the Two Minute Papers favorite, going into a physics simulation and playing with it with our own hands. Not everything is perfect here, however. Look, hand-hand interactions don't work so well, so folks who prefer virtual reality applications that include washing our hands should look elsewhere. And in this series we often say one more paper down the line and it will be significantly better. So now here's the moment of truth, let's see that one more paper down the line. Let's go in guns blazing and give it examples with challenging hand-hand interactions, deformations, lots of self-contact and self-occlusion. Take a look at this footage. This seems like a nightmare for any hand reconstruction algorithm. Who the heck can solve this? And look, interestingly, they also recorded the hand model with gloves as well. How curious! And now hold on to your papers, because these are not gloves. No, no, no, what you see here is the reconstruction of the hand model by a new algorithm. Look, it can deal with all of these rapid hand motions, and what's more, it also works on this challenging hand massage scene. Look at all those beautiful details. It not only fits like a glove here too, but I see creases, folds and deformations too. This reconstruction is truly out of this world. To be able to do this, the algorithm has to output triangle meshes that typically contain over a hundred thousand faces. Please remember this, as we will talk about it later. And now let's see how it does all this magic, because there's plenty of magic under the hood. We get five ingredients that are paramount to getting an output of this quality. Ingredient number one is the physics term. Without it, we can't even dream of tracking self-occlusion and contact properly. Two, since there are plenty of deformations going on in the input footage, the deformation term accounts for that. It makes a huge difference in the reconstruction of the thumb here. And if you think, wow, that is horrific, then you'll need to hold onto your papers for the next one, which is three, the geometric consistency term. This one is not for the faint of heart, you have been warned. Are you ready? Let's go. Yikes! A piece of advice: if you decide to implement this technique, make sure to include the geometric consistency term so no one has to see this footage ever again. Thank you. With the worst already behind us, let's proceed to ingredient number four, the photo consistency term. This ensures that fingernail tips don't end up sliding into the finger. And five, the collision term fixes problems like this to make sure that the fingers don't penetrate each other.
And this is an excellent paper, so in the evaluation section, these terms are also tested in isolation, and the authors tell us exactly how much each of these ingredients contributes to the solution. Now, these five ingredients are not cheap in terms of computation time, and remember, we also mentioned that many of these meshes have several hundred thousand faces. This means that this technique takes a very long time to compute all this. It is not real time, not even close; for instance, reconstructing the mesh for the hand massage scene takes more than ten minutes per frame. This means hours, or even days, of computation to accomplish this. Now the question naturally arises, is that a problem? No, not in the slightest. This is a zero-to-one paper, which means that it takes a problem that was previously impossible, and now it makes it possible. That is absolutely amazing. And as always, research is a process, and this is an important stepping stone in this process. I bet that two more good papers down the line, and we will be getting these gloves interactively. I am so happy about this solution, as it could finally give us new ways to interact with each other in virtual spaces, add more realism to digital characters, help us better understand human-human interactions, and it may also enable new applications in physical rehabilitation. And these reconstructions, indeed, fit these tasks like a glove. What a time to be alive! Perceptilabs is a visual API for TensorFlow, carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables, and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers to easily install the free local version of their system today. Thanks to Perceptilabs for their support, and for helping us make better videos for you. Thank you for watching, and for your generous support, and I'll see you next time.
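The reconstruction above is described as a combination of five energy terms that are minimized together. The sketch below only illustrates that weighted-sum structure: the term functions, weights, and the crude random-search refinement are placeholders for illustration, not the paper's method, which optimizes dense triangle meshes against camera footage.

```python
# Structural sketch of a weighted multi-term objective, with stand-in terms
# named after the five ingredients in the transcript above.
import numpy as np

def physics_term(p):      return 0.1 * np.sum(p**2)            # stand-in
def deformation_term(p):  return np.sum(np.abs(p))             # stand-in
def geometric_term(p):    return np.sum((p - 0.2)**2)          # stand-in
def photo_term(p):        return np.sum((p + 0.1)**2)          # stand-in
def collision_term(p):    return np.sum(np.maximum(0, -p))     # penalize "penetration"-like negatives

WEIGHTS = {  # relative importance of each ingredient (made-up values)
    physics_term: 1.0, deformation_term: 0.5, geometric_term: 2.0,
    photo_term: 1.0, collision_term: 10.0,
}

def total_energy(params):
    return sum(w * term(params) for term, w in WEIGHTS.items())

# Crude gradient-free refinement, just to show the objective being driven down.
params = np.array([0.5, -0.3, 0.8])
for step in range(200):
    candidate = params + np.random.default_rng(step).normal(scale=0.05, size=3)
    if total_energy(candidate) < total_energy(params):
        params = candidate
print(round(total_energy(params), 4), params.round(3))
```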
[{"start": 0.0, "end": 4.88, "text": " Dear Fellow Scholars, this is two minute tapers with Dr. K\u00e1loj Zsolnai-Feh\u00e9r."}, {"start": 4.88, "end": 10.120000000000001, "text": " The promise of virtual reality VR is indeed truly incredible."}, {"start": 10.120000000000001, "end": 15.32, "text": " If one day it comes to fruition, doctors could be trained to perform surgery in a virtual"}, {"start": 15.32, "end": 21.44, "text": " environment, we could train better pilots with better flight simulators, expose astronauts"}, {"start": 21.44, "end": 25.64, "text": " to virtual zero gravity simulations, you name it."}, {"start": 25.64, "end": 31.36, "text": " This previous work uses a learning based algorithm to teach a head-mounted camera to tell the"}, {"start": 31.36, "end": 34.56, "text": " orientation of our hands at all times."}, {"start": 34.56, "end": 37.56, "text": " Okay, so what can we do with this?"}, {"start": 37.56, "end": 38.88, "text": " A great deal."}, {"start": 38.88, "end": 44.92, "text": " For instance, we can type on a virtual keyboard or implement all kinds of virtual user interfaces"}, {"start": 44.92, "end": 47.08, "text": " that we can interact with."}, {"start": 47.08, "end": 52.44, "text": " We can also organize imaginary boxes and of course we can't leave out the two minute"}, {"start": 52.44, "end": 58.36, "text": " papers favorite, going into a physics simulation and playing it with our own hands."}, {"start": 58.36, "end": 62.16, "text": " But of course, not everything is perfect here, however."}, {"start": 62.16, "end": 67.6, "text": " Look, hand-hand interactions don't work so well, so Fox who prefer virtual reality"}, {"start": 67.6, "end": 72.28, "text": " applications that include washing our hands should look elsewhere."}, {"start": 72.28, "end": 77.28, "text": " And in this series we often say one more paper down the line and it will be significantly"}, {"start": 77.28, "end": 78.28, "text": " better."}, {"start": 78.28, "end": 84.08, "text": " So now here's the moment of truth, let's see that one more paper down the line."}, {"start": 84.08, "end": 90.0, "text": " Let's go in, guns blazing and give it examples with challenging hand-hand interactions,"}, {"start": 90.0, "end": 94.6, "text": " deformations, lots of self-contact and self-occlusion."}, {"start": 94.6, "end": 96.8, "text": " Take a look at this footage."}, {"start": 96.8, "end": 101.28, "text": " This seems like a nightmare for any hand reconstruction algorithm."}, {"start": 101.28, "end": 103.8, "text": " Who the heck can solve this?"}, {"start": 103.8, "end": 109.47999999999999, "text": " And look, interestingly they also recorded the hand model with gloves as well."}, {"start": 109.47999999999999, "end": 111.52, "text": " How curious!"}, {"start": 111.52, "end": 115.6, "text": " And now hold on to your papers because these are not gloves."}, {"start": 115.6, "end": 121.84, "text": " No no no, what you see here is the reconstruction of the hand model by a new algorithm."}, {"start": 121.84, "end": 128.16, "text": " Look, it can deal with all of these rapid hand motions and what's more, it also works"}, {"start": 128.16, "end": 131.07999999999998, "text": " on this challenging hand massage scene."}, {"start": 131.07999999999998, "end": 133.51999999999998, "text": " Look at all those beautiful details."}, {"start": 133.52, "end": 140.04000000000002, "text": " It not only fits like a glove here too, but I see creases, folds and deformations too."}, {"start": 140.04000000000002, "end": 
143.64000000000001, "text": " This reconstruction is truly out of this world."}, {"start": 143.64000000000001, "end": 149.16000000000003, "text": " To be able to do this, the algorithm has to output triangle meshes that typically contain"}, {"start": 149.16000000000003, "end": 151.92000000000002, "text": " over a hundred thousand faces."}, {"start": 151.92000000000002, "end": 155.68, "text": " Please remember this as we will talk about it later."}, {"start": 155.68, "end": 160.52, "text": " And now let's see how it does all this magic because there's plenty of magic under the"}, {"start": 160.52, "end": 161.52, "text": " hood."}, {"start": 161.52, "end": 167.28, "text": " We get five ingredients that are paramount to getting an output of this quality."}, {"start": 167.28, "end": 170.4, "text": " Ingredient number one is the physics term."}, {"start": 170.4, "end": 175.60000000000002, "text": " Without it we can't even dream of tracking self-occlusion and contact properly."}, {"start": 175.60000000000002, "end": 180.72, "text": " Two, since there are plenty of deformations going on in the input footage, the deformation"}, {"start": 180.72, "end": 183.0, "text": " term accounts for that."}, {"start": 183.0, "end": 187.20000000000002, "text": " It makes a huge difference in the reconstruction of the thumb here."}, {"start": 187.2, "end": 192.11999999999998, "text": " And if you think, wow, that is horrific, then you'll need to hold onto your papers for"}, {"start": 192.11999999999998, "end": 197.23999999999998, "text": " the next one, which is three, the geometric consistency term."}, {"start": 197.23999999999998, "end": 201.95999999999998, "text": " This one is not for the faint of the heart, you have been warned."}, {"start": 201.95999999999998, "end": 202.95999999999998, "text": " Are you ready?"}, {"start": 202.95999999999998, "end": 204.95999999999998, "text": " Let's go."}, {"start": 204.95999999999998, "end": 205.95999999999998, "text": " Yikes!"}, {"start": 205.95999999999998, "end": 213.04, "text": " A piece of advice, if you decide to implement this technique, make sure to include the"}, {"start": 213.04, "end": 218.6, "text": " geometric consistency term so no one has to see this footage ever again."}, {"start": 218.6, "end": 219.92, "text": " Thank you."}, {"start": 219.92, "end": 225.95999999999998, "text": " With the worst already behind us, let's proceed to ingredient number four, the photo consistency"}, {"start": 225.95999999999998, "end": 226.95999999999998, "text": " term."}, {"start": 226.95999999999998, "end": 233.76, "text": " This ensures that fingernail tips don't end up sliding into the finger."}, {"start": 233.76, "end": 239.04, "text": " And five, the collision term fixes problems like this to make sure that the fingers don't"}, {"start": 239.04, "end": 241.76, "text": " penetrate each other."}, {"start": 241.76, "end": 247.84, "text": " And this is an excellent paper, so in the evaluation section, these terms are also tested"}, {"start": 247.84, "end": 253.84, "text": " in isolation, and the authors tell us exactly how much each of these ingredients contribute"}, {"start": 253.84, "end": 255.35999999999999, "text": " to the solution."}, {"start": 255.35999999999999, "end": 260.76, "text": " Now, these five ingredients are not cheap in terms of computation time, and remember,"}, {"start": 260.76, "end": 265.84, "text": " we also mentioned that many of these meshes have several hundred thousand faces."}, {"start": 265.84, "end": 270.52, "text": " This means that this 
technique takes a very long time to compute all this."}, {"start": 270.52, "end": 275.64, "text": " It is not real time, not even close, for instance, reconstructing the mesh for the hand"}, {"start": 275.64, "end": 280.0, "text": " massage scene takes more than ten minutes per frame."}, {"start": 280.0, "end": 285.12, "text": " This means hours, or even days of computation, to accomplish this."}, {"start": 285.12, "end": 289.52, "text": " Now the question naturally arises, is that a problem?"}, {"start": 289.52, "end": 292.4, "text": " No, not in the slightest."}, {"start": 292.4, "end": 297.12, "text": " This is a zero to one paper, which means that it takes a problem that was previously"}, {"start": 297.12, "end": 300.84000000000003, "text": " impossible, and now it makes it possible."}, {"start": 300.84000000000003, "end": 303.28000000000003, "text": " That is absolutely amazing."}, {"start": 303.28000000000003, "end": 309.16, "text": " And as always, research is a process, and this is an important stepping stone in this process."}, {"start": 309.16, "end": 315.76, "text": " I bet that two more good papers down the line, and we will be getting these gloves interactively."}, {"start": 315.76, "end": 321.04, "text": " I am so happy about this solution, as it could finally give us no ways to interact with"}, {"start": 321.04, "end": 327.64000000000004, "text": " each other in virtual spaces, add more realism to digital characters, help us better understand"}, {"start": 327.64000000000004, "end": 334.84000000000003, "text": " human human interactions, and it may also enable new applications in physical rehabilitation."}, {"start": 334.84000000000003, "end": 339.24, "text": " And these reconstructions, indeed, fit these tasks like a glove."}, {"start": 339.24, "end": 341.12, "text": " What a time to be alive!"}, {"start": 341.12, "end": 346.6, "text": " Perceptilebs is a visual API for TensorFlow, carefully designed to make machine learning as"}, {"start": 346.6, "end": 348.68, "text": " intuitive as possible."}, {"start": 348.68, "end": 353.56, "text": " This gives you a faster way to build out models with more transparency into how your model"}, {"start": 353.56, "end": 357.84000000000003, "text": " is architected, how it performs, and how to debug it."}, {"start": 357.84000000000003, "end": 362.48, "text": " Look, it lets you toggle between the visual modeler and the code editor."}, {"start": 362.48, "end": 367.52, "text": " It even generates visualizations for all the model variables, and gives you recommendations"}, {"start": 367.52, "end": 372.24, "text": " both during modeling and training, and does all this automatically."}, {"start": 372.24, "end": 376.64, "text": " I only wish I had a tool like this when I was working on my neural networks during"}, {"start": 376.64, "end": 378.48, "text": " my PhD years."}, {"start": 378.48, "end": 383.6, "text": " And with perceptilebs.com, slash papers to easily install the free local version of their"}, {"start": 383.6, "end": 384.6, "text": " system today."}, {"start": 384.6, "end": 389.64000000000004, "text": " Thanks to perceptilebs for their support, and for helping us make better videos for"}, {"start": 389.64000000000004, "end": 390.64000000000004, "text": " you."}, {"start": 390.64, "end": 417.4, "text": " Thank you for watching, and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=JSNE_PIG1UQ
7 Years of Progress In Snow Simulation! ❄️
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "An Implicit Compressible SPH Solver for Snow Simulation" is available here: https://cg.informatik.uni-freiburg.de/publications/2020_SIGGRAPH_snow.pdf ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-2522720/ Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #gamedev
Dear Fellow Scholars, this is two minute papers with Dr. Kato Zsolnai-Fehir. Let's talk about snow simulations. Being able to simulate snow on our computers is not new. It's been possible for a few years now. For instance, this legendary Disney paper from 2013 was capable of performing that. So, why do researchers write new papers on this topic? Well, because the 2013 paper had its limitations. Let's talk about them while looking at some really sweet footage from a new paper. Limitation number one for previous works is that there's no simulations were sinfully expensive. Most of them took not seconds and sometimes not even minutes, but half an hour per frame. Yes, that means all nighter simulations. They were also not as good in fracturing interactions, simulating powder snow and avalanches were also out of reach. Until now, you see, simulating snow is quite challenging. It clamps up, deforms, breaks and hardens under compression. And even my favorite phase change from fluid to snow. These are all really challenging to simulate properly, and in a moment you will see that this new method is capable of even more beyond that. Let's start with friction. First, we turn on the snow machine, and then engage the wipers. That looks wonderful. And now, may I request seeing some tire marks? There you go. This looks really good, so how about taking a closer look at this phenomenon? VB here is the boundary friction coefficient, and it is a parameter that can be chosen freely by us. So let's see what that looks like. If we initialize this to a low value, we'll get very little friction. And if we crank this up, look, things get a great deal more sticky. A big clump of snow also breaks apart in a spectacular manner, also showcasing compression and fracturing, beyond the boundary friction effect we just looked at. Oh my, this is really good. Oh my, this is beautiful. Okay, now, this is a computer graphics paper. If you're a seasoned fellow scholar, you know what this means. It means that it is time to put some virtual bunnies into the oven. This is a great example for rule number one, for watching physics simulations, which is that we discussed the physics part, and not the visuals. So why are these bunnies blue? Well, let's chuck them up. Well, let's chuck them into the oven and find out. Aha, they are color coded for temperature. Look, they start from minus 100 Celsius, that's the blue, and we see the colors change as they approach 0 degrees Celsius. At this point, they don't yet start melting, but they are already falling apart. So it was indeed a good design decision to show the temperatures, because it tells us exactly what is going on here. Without this, we would be expecting melting. Well, can we see that melting in action too? You bet. Now, hold on to your papers, and bring forth the soft isomet. This machine can not only create an exquisite dessert for computer graphics researchers, but also showcases the individual contributions of this new technique, one by one. There is the melting. Yes. Add a little frosting, and there you go. Bon appétit. Now, as we feared, many of these larger-scale simulations require computing the physics for millions of particles. So how long does that take? When we need millions of particles, we typically have to wait a few minutes per frame, but if we have a smaller scene, we can get away with these computations in a few seconds per frame. Goodness. We went from hours per frame to seconds per frame in just one paper. Outstanding work. And also, wait a second. 
If we are talking millions of particles, I wonder how much memory it takes to keep track of them. Let's see. Whoa. This is very appealing. I was expecting a few gigabytes, yet it only asks for a fraction of that, a couple hundred megabytes. So, with this hefty value proposition, it is no wonder that this paper has been accepted to the SIGGRAPH conference. This is the Olympic gold medal of computer graphics research, if you will. Huge congratulations to the authors of this paper. This was quite an experience. What a time to be alive. This episode has been supported by Lambda GPU Cloud. If you are looking for inexpensive cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold on to your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only cloud service with 48 GB RTX 8000s. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to LambdaLabs.com/papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
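To make the memory figure mentioned in the transcript above concrete, here is a hedged back-of-the-envelope sketch in Python. The per-particle state (16 single-precision floats for position, velocity, deformation gradient, and temperature) is a hypothetical layout chosen only for illustration, not the actual data layout of the SPH snow solver.

```python
# Back-of-the-envelope memory estimate for a particle-based snow solver.
# The per-particle layout below is an assumption for illustration only.

BYTES_PER_FLOAT = 4  # single precision

def particle_memory_mb(num_particles, floats_per_particle=16):
    """Hypothetical layout: position (3) + velocity (3) + deformation
    gradient (9) + temperature (1) = 16 floats per particle."""
    total_bytes = num_particles * floats_per_particle * BYTES_PER_FLOAT
    return total_bytes / (1024 ** 2)

if __name__ == "__main__":
    for n in (1_000_000, 5_000_000):
        print(f"{n:>9,} particles ≈ {particle_memory_mb(n):7.1f} MB")
    # 1M particles ≈ 61 MB, 5M ≈ 305 MB — roughly consistent with the
    # "couple hundred megabytes" figure quoted in the video.
```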
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Kato Zsolnai-Fehir."}, {"start": 4.64, "end": 7.12, "text": " Let's talk about snow simulations."}, {"start": 7.12, "end": 10.88, "text": " Being able to simulate snow on our computers is not new."}, {"start": 10.88, "end": 13.120000000000001, "text": " It's been possible for a few years now."}, {"start": 13.120000000000001, "end": 19.92, "text": " For instance, this legendary Disney paper from 2013 was capable of performing that."}, {"start": 19.92, "end": 24.400000000000002, "text": " So, why do researchers write new papers on this topic?"}, {"start": 24.400000000000002, "end": 28.64, "text": " Well, because the 2013 paper had its limitations."}, {"start": 28.64, "end": 33.52, "text": " Let's talk about them while looking at some really sweet footage from a new paper."}, {"start": 33.52, "end": 37.68, "text": " Limitation number one for previous works is that there's no simulations"}, {"start": 37.68, "end": 39.6, "text": " were sinfully expensive."}, {"start": 39.6, "end": 44.32, "text": " Most of them took not seconds and sometimes not even minutes,"}, {"start": 44.32, "end": 46.400000000000006, "text": " but half an hour per frame."}, {"start": 47.28, "end": 49.760000000000005, "text": " Yes, that means all nighter simulations."}, {"start": 50.400000000000006, "end": 53.760000000000005, "text": " They were also not as good in fracturing interactions,"}, {"start": 53.76, "end": 58.08, "text": " simulating powder snow and avalanches were also out of reach."}, {"start": 58.8, "end": 63.199999999999996, "text": " Until now, you see, simulating snow is quite challenging."}, {"start": 63.199999999999996, "end": 67.6, "text": " It clamps up, deforms, breaks and hardens under compression."}, {"start": 68.16, "end": 72.16, "text": " And even my favorite phase change from fluid to snow."}, {"start": 72.16, "end": 75.28, "text": " These are all really challenging to simulate properly,"}, {"start": 75.28, "end": 81.12, "text": " and in a moment you will see that this new method is capable of even more beyond that."}, {"start": 81.12, "end": 83.12, "text": " Let's start with friction."}, {"start": 83.12, "end": 85.12, "text": " First, we turn on the snow machine,"}, {"start": 85.12, "end": 89.12, "text": " and then engage the wipers."}, {"start": 89.12, "end": 91.12, "text": " That looks wonderful."}, {"start": 91.12, "end": 95.12, "text": " And now, may I request seeing some tire marks?"}, {"start": 95.12, "end": 97.12, "text": " There you go."}, {"start": 97.12, "end": 99.12, "text": " This looks really good,"}, {"start": 99.12, "end": 103.12, "text": " so how about taking a closer look at this phenomenon?"}, {"start": 103.12, "end": 105.12, "text": " VB here is the boundary friction coefficient,"}, {"start": 105.12, "end": 109.12, "text": " and it is a parameter that can be chosen freely by us."}, {"start": 109.12, "end": 111.12, "text": " So let's see what that looks like."}, {"start": 111.12, "end": 113.12, "text": " If we initialize this to a low value,"}, {"start": 113.12, "end": 115.12, "text": " we'll get very little friction."}, {"start": 117.12, "end": 119.12, "text": " And if we crank this up,"}, {"start": 119.12, "end": 123.12, "text": " look, things get a great deal more sticky."}, {"start": 125.12, "end": 129.12, "text": " A big clump of snow also breaks apart in a spectacular manner,"}, {"start": 129.12, "end": 131.12, "text": " also showcasing compression and fracturing,"}, {"start": 
131.12, "end": 135.12, "text": " beyond the boundary friction effect we just looked at."}, {"start": 135.12, "end": 137.12, "text": " Oh my, this is really good."}, {"start": 137.12, "end": 139.12, "text": " Oh my, this is beautiful."}, {"start": 139.12, "end": 143.12, "text": " Okay, now, this is a computer graphics paper."}, {"start": 143.12, "end": 145.12, "text": " If you're a seasoned fellow scholar,"}, {"start": 145.12, "end": 147.12, "text": " you know what this means."}, {"start": 147.12, "end": 151.12, "text": " It means that it is time to put some virtual bunnies into the oven."}, {"start": 151.12, "end": 153.12, "text": " This is a great example for rule number one,"}, {"start": 153.12, "end": 155.12, "text": " for watching physics simulations,"}, {"start": 155.12, "end": 157.12, "text": " which is that we discussed the physics part,"}, {"start": 157.12, "end": 159.12, "text": " and not the visuals."}, {"start": 159.12, "end": 163.12, "text": " So why are these bunnies blue?"}, {"start": 163.12, "end": 165.12, "text": " Well, let's chuck them up."}, {"start": 165.12, "end": 169.12, "text": " Well, let's chuck them into the oven and find out."}, {"start": 173.12, "end": 177.12, "text": " Aha, they are color coded for temperature."}, {"start": 177.12, "end": 180.12, "text": " Look, they start from minus 100 Celsius,"}, {"start": 180.12, "end": 182.12, "text": " that's the blue,"}, {"start": 182.12, "end": 186.12, "text": " and we see the colors change as they approach 0 degrees Celsius."}, {"start": 186.12, "end": 189.12, "text": " At this point, they don't yet start melting,"}, {"start": 189.12, "end": 191.12, "text": " but they are already falling apart."}, {"start": 191.12, "end": 195.12, "text": " So it was indeed a good design decision to show the temperatures,"}, {"start": 195.12, "end": 199.12, "text": " because it tells us exactly what is going on here."}, {"start": 199.12, "end": 203.12, "text": " Without this, we would be expecting melting."}, {"start": 203.12, "end": 206.12, "text": " Well, can we see that melting in action too?"}, {"start": 206.12, "end": 207.12, "text": " You bet."}, {"start": 207.12, "end": 209.12, "text": " Now, hold on to your papers,"}, {"start": 209.12, "end": 212.12, "text": " and bring forth the soft isomet."}, {"start": 212.12, "end": 215.12, "text": " This machine can not only create an exquisite dessert"}, {"start": 215.12, "end": 217.12, "text": " for computer graphics researchers,"}, {"start": 217.12, "end": 222.12, "text": " but also showcases the individual contributions of this new technique,"}, {"start": 222.12, "end": 223.12, "text": " one by one."}, {"start": 223.12, "end": 225.12, "text": " There is the melting."}, {"start": 225.12, "end": 227.12, "text": " Yes."}, {"start": 227.12, "end": 231.12, "text": " Add a little frosting, and there you go."}, {"start": 233.12, "end": 235.12, "text": " Bon app\u00e9tit."}, {"start": 235.12, "end": 238.12, "text": " Now, as we feared, many of these larger-scale simulations"}, {"start": 238.12, "end": 242.12, "text": " require computing the physics for millions of particles."}, {"start": 242.12, "end": 245.12, "text": " So how long does that take?"}, {"start": 245.12, "end": 247.12, "text": " When we need millions of particles,"}, {"start": 247.12, "end": 250.12, "text": " we typically have to wait a few minutes per frame,"}, {"start": 250.12, "end": 252.12, "text": " but if we have a smaller scene,"}, {"start": 252.12, "end": 254.12, "text": " we can get away with these computations"}, 
{"start": 254.12, "end": 257.12, "text": " in a few seconds per frame."}, {"start": 257.12, "end": 258.12, "text": " Goodness."}, {"start": 258.12, "end": 264.12, "text": " We went from hours per frame to seconds per frame in just one paper."}, {"start": 264.12, "end": 266.12, "text": " Outstanding work."}, {"start": 266.12, "end": 268.12, "text": " And also, wait a second."}, {"start": 268.12, "end": 271.12, "text": " If we are talking millions of particles,"}, {"start": 271.12, "end": 274.12, "text": " I wonder how much memory it takes to keep track of them."}, {"start": 274.12, "end": 276.12, "text": " Let's see."}, {"start": 276.12, "end": 277.12, "text": " Whoa."}, {"start": 277.12, "end": 279.12, "text": " This is very appealing."}, {"start": 279.12, "end": 282.12, "text": " I was expecting a few gigabytes,"}, {"start": 282.12, "end": 285.12, "text": " yet it only asks for a fraction of it,"}, {"start": 285.12, "end": 287.12, "text": " a couple hundred megabytes."}, {"start": 287.12, "end": 290.12, "text": " So, with this hefty value proposition,"}, {"start": 290.12, "end": 293.12, "text": " it is no wonder that this paper has been accepted"}, {"start": 293.12, "end": 295.12, "text": " to the C-Graph conference."}, {"start": 295.12, "end": 298.12, "text": " This is the Olympic Gold Medal of Computer Graphics Research,"}, {"start": 298.12, "end": 299.12, "text": " if you will."}, {"start": 299.12, "end": 301.12, "text": " Huge congratulations to the authors of this paper."}, {"start": 301.12, "end": 304.12, "text": " This was quite an experience."}, {"start": 304.12, "end": 306.12, "text": " What a time to be alive."}, {"start": 306.12, "end": 309.12, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 309.12, "end": 313.12, "text": " If you are looking for inexpensive Cloud GPUs for AI,"}, {"start": 313.12, "end": 315.12, "text": " check out Lambda GPU Cloud."}, {"start": 315.12, "end": 320.12, "text": " They recently launched Quadro RTX 6000, RTX 8000,"}, {"start": 320.12, "end": 322.12, "text": " and V100 instances."}, {"start": 322.12, "end": 324.12, "text": " And hold onto your papers,"}, {"start": 324.12, "end": 329.12, "text": " because Lambda GPU Cloud can cost less than half of AWS and Asia."}, {"start": 329.12, "end": 333.12, "text": " Plus, they are the only Cloud service with 48 GB,"}, {"start": 333.12, "end": 335.12, "text": " RTX 8000."}, {"start": 335.12, "end": 338.12, "text": " Join researchers at organizations like Apple,"}, {"start": 338.12, "end": 341.12, "text": " MIT, and Caltech in using Lambda Cloud instances,"}, {"start": 341.12, "end": 343.12, "text": " workstations, or servers."}, {"start": 343.12, "end": 345.12, "text": " Make sure to go to LambdaLabs.com,"}, {"start": 345.12, "end": 350.12, "text": " slash papers to sign up for one of their amazing GPU instances today."}, {"start": 350.12, "end": 352.12, "text": " Our thanks to Lambda for their long-standing support"}, {"start": 352.12, "end": 355.12, "text": " and for helping us make better videos for you."}, {"start": 355.12, "end": 358.12, "text": " Thanks for watching and for your generous support,"}, {"start": 358.12, "end": 360.12, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=mb6WJ34xQXg
This Neural Network Makes Virtual Humans Dance! 🕺
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/wandb/in-between/reports/-Overview-Robust-Motion-In-betweening---Vmlldzo0MzkzMzA 📝 The paper "Robust Motion In-betweening" is available here: - https://static-wordpress.akamaized.net/montreal.ubisoft.com/wp-content/uploads/2020/07/09155337/RobustMotionInbetweening.pdf - https://montreal.ubisoft.com/en/automatic-in-betweening-for-faster-animation-authoring/ Dataset: https://github.com/XefPatterson/Ubisoft-LaForge-Animation-Dataset 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Thumbnail background image credit: https://pixabay.com/images/id-2122473/ Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #gamedev
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Most people think that if we have a piece of camera footage that is a little choppy, then there is nothing we can do with it, we better throw it away. Is that true? No, not at all. Earlier, we discussed two potential techniques to remedy this common problem. The problem statement is simple: in goes a choppy video, something happens, and then out comes a smooth and creamy video. This process is often referred to as frame interpolation or frame in-betweening. And of course, it's easier said than done. If it works well, it really looks like magic, much like in the science fiction movies. So what are these potential somethings that we can use to make this happen? One, optical flow. This is an originally handcrafted method that tries to predict the motion that takes place between these frames. This can kind of produce new information, and I use this in these videos on a regular basis, but the output footage also has to be carefully inspected for unwanted artifacts, which are a relatively common occurrence. Two, we can also try to give a bunch of training data to a neural network and teach it to perform this frame in-betweening. And if we do, the results are magnificent. We can do so much with this. But if we can do this for video frames, here is a crazy idea. How about a similar kind of in-betweening for animating humanoids? That would really be something else, and it would save us so much time and work. Let's see what this new method can do in this area. The value proposition of this technique is as simple as it gets. We set up a bunch of keyframes, these are the transparent figures, and the neural network creates realistic motion that transitions from one stage to the next one. Look, it really seems to be able to do it all, it can perform twists and turns, brisk walks and runs, and you will see in a minute, even dance moves. Hmm, this in-betweening for animating humanoid motion idea may not be so crazy after all. What's more, this could be super useful for artists working in the industry, who can not only do all this, but they can also set up movement variations by moving the keyframes around spatially. Or, we can even set up temporal variations to create different timings for the movement. Excellent. Of course, it cannot do everything: if we set up the intermediate stages in a way that uncommon motions would be required to fill in, we might end up with one of these failure cases. And all these results depend on how much training data we have with the kinds of motion we need to fill in. Let's have a look at a more detailed example. This smooth chap has been given lots of training data with dancing moves, and look. When we pull out these dance moves from his training data, he becomes a drunkard. So, talking about training data, how much motion capture footage was given to this algorithm? Well, it used the Ubisoft La Forge animation dataset. This contains five subjects, 77 sequences, and about 4.5 hours of footage in total. Wow, that is not that much. For instance, it only has eight movement sequences for dancing. That is not much at all. And we've already seen that the model can dance. That is some serious data efficiency, especially given that it can even climb through obstacles. So much knowledge has been extracted from so little data. It truly feels like we are living in a science fiction world. What a time to be alive. So, when we write a paper like this, how do we compare the results to previous techniques? 
How can we decide which technique is better? Well, the level one solution is a user study. We call some folks in, show them the footage, and ask which one they like best, the previous method or this one. That would work, but of course, it is quite laborious. Fortunately, there is a level two solution. And this level two solution is called normalized power spectrum similarity, NPSS in short. This is a number that we can produce with the computer. No humans are required, and it measures how believable these motions are. And the key of NPSS is that it correlates with human judgment, or in other words, if it says that one technique is better, then it is likely that humans would also come to the same conclusion. So, let's see those results. Here are the previous methods. NPSS is subject to minimization, in other words, the lower, the better, and let's see the new method. Oh yes, it indeed outpaces the competition. So it is no wonder that this incredible paper was accepted to the SIGGRAPH Asia conference. What does that mean exactly? If research were the Olympics, a SIGGRAPH or SIGGRAPH Asia paper would be the gold medal. And this was one of Mr. Félix Harvey's first few papers. Huge congratulations. And as an additional goodie, it can create an animation of me when I lost my papers. And this is me when I found them. Do you have some more ideas on how we could put such an amazing technique to use? Let me know in the comments below. What you see here is a report on this exact paper we have talked about, which was made by Weights & Biases. I put a link to it in the description. Make sure to have a look. I think it helps you understand this paper better. If you work with learning algorithms on a regular basis, make sure to check out Weights & Biases. Their system is designed to help you organize your experiments, and it is so good it could shave off weeks or even months of work from your projects, and it is completely free for all individuals, academics, and open source projects. This really is as good as it gets, and it is hardly a surprise that they are now used by over 200 companies and research institutions. Make sure to visit them through wandb.com/papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
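The transcript above only explains NPSS at a high level, so here is a minimal, hedged NumPy sketch of a power-spectrum-based similarity in the same spirit: lower values mean the predicted motion's frequency content is closer to the ground truth. The exact normalization, weighting, and aggregation of the real NPSS metric may differ; the function name `npss_like` and the (time, features) input layout are assumptions for illustration.

```python
import numpy as np

def npss_like(gt, pred):
    """Simplified sketch of a power-spectrum similarity in the spirit of NPSS.
    gt, pred: arrays of shape (time, features), e.g. joint angles over time.
    Lower is better. This is NOT the exact metric from the paper."""
    # Power spectra along the time axis.
    ps_gt = np.abs(np.fft.rfft(gt, axis=0)) ** 2
    ps_pred = np.abs(np.fft.rfft(pred, axis=0)) ** 2
    # Normalize each feature's spectrum into a distribution over frequencies.
    ps_gt_n = ps_gt / (ps_gt.sum(axis=0, keepdims=True) + 1e-12)
    ps_pred_n = ps_pred / (ps_pred.sum(axis=0, keepdims=True) + 1e-12)
    # 1-D earth mover's distance between cumulative spectra, per feature.
    emd = np.abs(np.cumsum(ps_gt_n, axis=0) - np.cumsum(ps_pred_n, axis=0)).sum(axis=0)
    # Weight each feature by how much power it carries in the ground truth.
    weights = ps_gt.sum(axis=0) / (ps_gt.sum() + 1e-12)
    return float((emd * weights).sum())

# Toy usage: a prediction with the right frequencies scores lower (better).
t = np.linspace(0, 1, 120)[:, None]
gt = np.sin(2 * np.pi * 3 * t)
print(npss_like(gt, np.sin(2 * np.pi * 3 * t + 0.1)))  # small value
print(npss_like(gt, np.sin(2 * np.pi * 9 * t)))        # larger value
```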
[{"start": 0.0, "end": 4.88, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karajon Aifahir."}, {"start": 4.88, "end": 10.120000000000001, "text": " Most people think that if we have a piece of camera footage that is a little choppy, then"}, {"start": 10.120000000000001, "end": 13.92, "text": " there is nothing we can do with it, we better throw it away."}, {"start": 13.92, "end": 14.92, "text": " Is that true?"}, {"start": 14.92, "end": 16.96, "text": " No, not at all."}, {"start": 16.96, "end": 21.68, "text": " Earlier, we discussed two potential techniques to remedy this common problem."}, {"start": 21.68, "end": 28.16, "text": " The problem statement is simple, in goes a choppy video, something happens, and then out"}, {"start": 28.16, "end": 30.64, "text": " comes a smooth and creamy video."}, {"start": 30.64, "end": 36.84, "text": " This process is often referred to as frame interpolation or frame in between."}, {"start": 36.84, "end": 39.68, "text": " And of course, it's easier said than done."}, {"start": 39.68, "end": 45.24, "text": " If it works well, it really looks like magic, much like in the science fiction movies."}, {"start": 45.24, "end": 50.36, "text": " So what are the potential some things that we can use to make this happen?"}, {"start": 50.36, "end": 52.2, "text": " One optical flow."}, {"start": 52.2, "end": 57.36, "text": " This is an originally handcrafted method that tries to predict the motion that takes place"}, {"start": 57.36, "end": 59.24, "text": " between these frames."}, {"start": 59.24, "end": 65.32, "text": " This can kind of produce new information, and I use this in these videos on a regular basis,"}, {"start": 65.32, "end": 71.4, "text": " but the output footage also has to be carefully inspected for unwanted artifacts, which are"}, {"start": 71.4, "end": 73.4, "text": " a relatively common occurrence."}, {"start": 73.4, "end": 79.48, "text": " Two, we can also try to give a bunch of training data to a neural network and teach it to perform"}, {"start": 79.48, "end": 81.48, "text": " this frame in between."}, {"start": 81.48, "end": 85.16, "text": " And if we do, the results are magnificent."}, {"start": 85.16, "end": 87.84, "text": " We can do so much with this."}, {"start": 87.84, "end": 92.92, "text": " But if we can do this for video frames, here is a crazy idea."}, {"start": 92.92, "end": 98.28, "text": " How about a similar kind of in-betweening for animating humanoids?"}, {"start": 98.28, "end": 103.64, "text": " That would really be something else, and it would save us so much time and work."}, {"start": 103.64, "end": 106.96, "text": " Let's see what this new method can do in this area."}, {"start": 106.96, "end": 111.12, "text": " The value proposition of this technique is as simple as it gets."}, {"start": 111.12, "end": 116.2, "text": " To set up a bunch of keyframes, these are the transparent figures, and the neural network"}, {"start": 116.2, "end": 121.48, "text": " creates realistic motion that transitions from one stage to the next one."}, {"start": 121.48, "end": 127.2, "text": " Look, it really seems to be able to do it all, it can perform twists and turns, wrist"}, {"start": 127.2, "end": 132.84, "text": " walks and runs, and you will see in a minute even dance moves."}, {"start": 132.84, "end": 139.64000000000001, "text": " Hmm, this in-betweening for animating humanoid motion idea may not be so crazy after all."}, {"start": 139.64, "end": 144.6, "text": " Once more, this could be super useful for artists 
working in the industry who can not only"}, {"start": 144.6, "end": 150.76, "text": " do all this, but they can also set up movement variations by moving the keyframes around"}, {"start": 150.76, "end": 151.76, "text": " spatially."}, {"start": 151.76, "end": 159.76, "text": " Or, we can even set up temporal variations to create different timings for the movement."}, {"start": 159.76, "end": 160.76, "text": " Excellent."}, {"start": 160.76, "end": 166.07999999999998, "text": " Of course, it cannot do everything if we set up the intermediate stages in a way that"}, {"start": 166.08, "end": 171.04000000000002, "text": " uncommon motions would be required to fill in, we might end up with one of these failure"}, {"start": 171.04000000000002, "end": 172.36, "text": " cases."}, {"start": 172.36, "end": 177.36, "text": " And all these results depend on how much training data we have with the kinds of motion"}, {"start": 177.36, "end": 178.68, "text": " we need to fill in."}, {"start": 178.68, "end": 181.84, "text": " Let's have a look at a more detailed example."}, {"start": 181.84, "end": 190.04000000000002, "text": " This smooth chap has been given lots of training data with dancing moves, and look."}, {"start": 190.04, "end": 196.2, "text": " When we pull out these dance moves from his training data, he becomes a drunkard."}, {"start": 196.2, "end": 202.04, "text": " So talking about training data, how much motion capture footage was given to this algorithm?"}, {"start": 202.04, "end": 206.56, "text": " Well, it used the Ubisoft La Forge animation dataset."}, {"start": 206.56, "end": 214.28, "text": " This contains five subjects, 77 sequences, and about 4.5 hours of footage in total."}, {"start": 214.28, "end": 217.16, "text": " Wow, that is not that much."}, {"start": 217.16, "end": 221.84, "text": " For instance, it only has eight movement sequences for dancing."}, {"start": 221.84, "end": 223.88, "text": " That is not much at all."}, {"start": 223.88, "end": 227.16, "text": " And we've already seen that the model can dance."}, {"start": 227.16, "end": 233.6, "text": " That is some serious data efficiency, especially given that it can even climb through obstacles."}, {"start": 233.6, "end": 237.51999999999998, "text": " So much knowledge has been extracted from so little data."}, {"start": 237.51999999999998, "end": 241.4, "text": " It truly feels like we are living in a science fiction world."}, {"start": 241.4, "end": 243.04, "text": " What a time to be alive."}, {"start": 243.04, "end": 248.84, "text": " So, when we write a paper like this, how do we compare the results to previous techniques?"}, {"start": 248.84, "end": 251.88, "text": " How can we decide which technique is better?"}, {"start": 251.88, "end": 255.32, "text": " Well, the level one solution is a user study."}, {"start": 255.32, "end": 260.36, "text": " We call some folks in, show them the footage, ask which one they like best, the previous"}, {"start": 260.36, "end": 262.36, "text": " method, or this one."}, {"start": 262.36, "end": 268.36, "text": " That would work, but of course, it is quite laborious, but fortunately, there is a level two"}, {"start": 268.36, "end": 269.44, "text": " solution."}, {"start": 269.44, "end": 276.28, "text": " And this level two solution is called normalized power spectrum similarity, NPSS in short."}, {"start": 276.28, "end": 279.16, "text": " This is a number that we can produce with the computer."}, {"start": 279.16, "end": 284.4, "text": " No humans are required, and it measures how 
believable these motions are."}, {"start": 284.4, "end": 289.76, "text": " And the key of NPSS is that it correlates with human judgment, or in other words, if"}, {"start": 289.76, "end": 295.12, "text": " this says that the technique is better, then it is likely that humans would also come"}, {"start": 295.12, "end": 296.92, "text": " to the same conclusion."}, {"start": 296.92, "end": 299.32, "text": " So, let's see those results."}, {"start": 299.32, "end": 301.36, "text": " Here are the previous methods."}, {"start": 301.36, "end": 308.0, "text": " NPSS is subject to minimization, in other words, the lower, the better, and let's see the"}, {"start": 308.0, "end": 309.48, "text": " new method."}, {"start": 309.48, "end": 314.71999999999997, "text": " Oh, yes, it indeed outpaces the competition."}, {"start": 314.71999999999997, "end": 321.0, "text": " So there is no wonder that this incredible paper was accepted to the SIGRAPH Asia conference."}, {"start": 321.0, "end": 322.84, "text": " What does that mean exactly?"}, {"start": 322.84, "end": 328.84, "text": " If research were the Olympics, a SIGRAPH where SIGRAPH Asia paper would be the gold medal."}, {"start": 328.84, "end": 332.96, "text": " And this was Mr. Felix Harvey's first few papers."}, {"start": 332.96, "end": 334.64, "text": " Huge congratulations."}, {"start": 334.64, "end": 342.4, "text": " And as an additional goodie, it can create an animation of me when I lost my papers."}, {"start": 342.4, "end": 344.88, "text": " And this is me when I found them."}, {"start": 344.88, "end": 349.79999999999995, "text": " Do you have some more ideas on how we could put such an amazing technique to use?"}, {"start": 349.79999999999995, "end": 351.59999999999997, "text": " Let me know in the comments below."}, {"start": 351.59999999999997, "end": 356.12, "text": " What you see here is a report of this exact paper we have talked about, which was made"}, {"start": 356.12, "end": 357.84, "text": " by weights and biases."}, {"start": 357.84, "end": 360.0, "text": " I put a link to it in the description."}, {"start": 360.0, "end": 361.0, "text": " Make sure to have a look."}, {"start": 361.0, "end": 364.56, "text": " I think it helps you understand this paper better."}, {"start": 364.56, "end": 369.23999999999995, "text": " If you work with learning algorithms on a regular basis, make sure to check out weights"}, {"start": 369.23999999999995, "end": 370.23999999999995, "text": " and biases."}, {"start": 370.23999999999995, "end": 375.23999999999995, "text": " Their system is designed to help you organize your experiments and it is so good it could"}, {"start": 375.23999999999995, "end": 380.52, "text": " shave off weeks or even months of work from your projects and is completely free for all"}, {"start": 380.52, "end": 384.55999999999995, "text": " individuals, academics, and open source projects."}, {"start": 384.56, "end": 389.2, "text": " This really is as good as it gets and it is hardly a surprise that they are now used by"}, {"start": 389.2, "end": 393.12, "text": " over 200 companies and research institutions."}, {"start": 393.12, "end": 398.68, "text": " Make sure to visit them through wnb.com slash papers or just click the link in the video"}, {"start": 398.68, "end": 401.76, "text": " description and you can get a free demo today."}, {"start": 401.76, "end": 406.48, "text": " Our thanks to weights and biases for their longstanding support and for helping us make"}, {"start": 406.48, "end": 407.84000000000003, "text": " better videos 
for you."}, {"start": 407.84, "end": 414.84, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=7O7W-_FKRMQ
Episode 500 - 8 Years Of Progress In Cloth Simulations! 👕
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/stacey/yolo-drive/reports/Bounding-Boxes-for-Object-Detection--Vmlldzo4Nzg4MQ 📝 The paper "Robust Eulerian-On-Lagrangian Rods" is available here: http://mslab.es/projects/RobustEOLRods/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Kenneth Davis, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers, episode number 500, with Dr. Károly Zsolnai-Fehér. And on this glorious day, we are going to simulate the kinematics of yarn and cloth on our computers. We will transition into today's paper in a moment, but for context, here is a wonderful work to show you what we were able to do in 2012, and we will see how far we have come since. This previous work was about creating these highly detailed cloth geometries for digital characters. Here you see one of its coolest results, where it shows how the simulated forces pull the entire piece of garment together. We start out by dreaming up a piece of cloth geometry, and this simulator gradually transforms it into a real-world version of that by subjecting it to real physical forces. This is a step that we call yarn-level relaxation. A few years ago, when I worked at Disney Research, I attended the talk of the Oscar award-winning researcher Steve Marschner, who presented this paper. And when I saw these results, shockwaves went through my body. It was one of my truly formative hold-on-to-your-papers moments that I never forget. Now note that to produce these results, one had to wait for hours and hours to compute all these interactions. So, this paper was published in 2012, and now nearly nine years have passed, so I wonder how far we have come since. Well, let's see together. Today, with this new technique, we can conjure up similar animations where the pieces of garments tighten. Beautiful. Now, let's look under the hood of the simulator and... Well, well, well. Do you see what I see here? Red dots. So, why did I get so excited about a couple of red dots? Let's find out together. These red dots solve a fundamental problem when simulating the movement of these yarns. The issue is that in the mathematical description of this problem, there is a stiffness term that does not behave well when two of these points slide too close to each other. Interestingly, our simulated material gets infinitely stiff at these points. This is incorrect behavior, and it makes the simulation unstable. Not good. So, what do we do to alleviate this? Now, we can use this new technique that detects these cases and addresses them by introducing these additional red nodes. These are used as a stand-in until things stabilize. Look, we wait until these two points slide off of each other, and now the distances are large enough so that the mathematical framework can regain its validity and compute the stiffness term correctly. And, look, the red dot disappears and the simulation can continue without breaking. So, if we go back to another piece of under-the-hood footage, we now understand why these red dots come and go. They come when two nodes get too close to each other, and they disappear as they pass each other, keeping the simulation intact. And, with this method, we can simulate this beautiful phenomenon where we throw a piece of garment onto a sphere, and all kinds of stretching and sliding takes place. Marvelous! So, what else can this do? Oh boy, it can even simulate multiple cloth layers; look at the pocket and the stitching patterns here. Beautiful. We can also put a neck tag on this shirt and start stretching and shearing it into oblivion; pay special attention to the difference in how the shirt and the neck tag react to the same forces. We can also stack three tablecloths on top of each other and see how they would behave if we did not simulate friction. And now, the same footage with friction. Much more realistic. 
And if we look under the hood, you see that the algorithm is doing a ton of work with these red nodes. Look, the table shows that they had to insert tens of thousands of these nodes to keep the simulation intact. So, how long do we have to wait for a simulation like this? The 2012 paper took several hours, so what about this one? Well, this says that we need a few seconds per time step, and typically several time steps correspond to one frame. So, where does this put us? Well, it puts us in the domain of not hours per every frame of animation here, but minutes and sometimes even seconds per frame. And not only that, but this simulator is also more robust, as it can deal with these unpleasant cases where these points get too close to each other. So, I think this was a great testament to the amazing rate of progress in computer graphics research. What a time to be alive. This episode has been supported by Weights & Biases. In this post, they show you how to use their tool to draw bounding boxes for object detection and, even more importantly, how to debug them. During my PhD studies, I trained a ton of neural networks which were used in our experiments. However, over time, there was just too much data in our repositories, and what I am looking for is not data, but insight. And that's exactly how Weights & Biases helps you, by organizing your experiments. It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub, and more. And get this, Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com/papers or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
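As a rough illustration of the "red node" bookkeeping described in the transcript above, here is a hedged toy sketch in Python. It only mimics the idea in one dimension (insert a temporary node when two sliding contact nodes get too close, retire it once they have slid past each other); the real Eulerian-on-Lagrangian rod solver works on full 3D yarn geometry, and every name and threshold here is made up for illustration.

```python
EPS = 1e-3  # toy minimum separation below which the stiffness term misbehaves

def update_temp_nodes(coords, temp_nodes):
    """coords: sorted 1-D sliding-node coordinates along a yarn (toy stand-in
    for the real Eulerian coordinates). temp_nodes: previously inserted
    temporary ("red") nodes. Returns the updated set of temporary nodes."""
    updated = set()
    for a, b in zip(coords, coords[1:]):
        if b - a < EPS:
            # Two nodes slid too close: park a stand-in node between them
            # so the discretization stays well-behaved.
            updated.add(0.5 * (a + b))
    # Keep earlier stand-ins only while some node is still nearly touching them;
    # once the nodes have slid past each other, the stand-in is retired.
    for t in temp_nodes:
        if any(abs(c - t) < EPS for c in coords):
            updated.add(t)
    return updated

# Example: two nodes approach, a temporary node appears, then disappears.
print(update_temp_nodes([0.10, 0.1005, 0.30], set()))    # one stand-in near 0.10025
print(update_temp_nodes([0.05, 0.20, 0.30], {0.10025}))  # empty set: retired
```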
[{"start": 0.0, "end": 7.0, "text": " Dear Fellow Scholars, this is Two Minute Papers, episode number 500 with Dr. Karajol Nehfehir."}, {"start": 7.0, "end": 16.0, "text": " And on this glorious day, we are going to simulate the kinematics of yarn and cloth on our computers."}, {"start": 16.0, "end": 26.0, "text": " We will transition into today's paper in a moment, but for context, here is a wonderful work to show you what we were able to do in 2012,"}, {"start": 26.0, "end": 35.0, "text": " and we will see how far we have come since. This previous work was about creating these highly detailed cloth geometries for digital characters."}, {"start": 35.0, "end": 44.0, "text": " Here you see one of its coolest results where it shows how the simulated forces pull the entire piece of garment together."}, {"start": 44.0, "end": 56.0, "text": " We start out with dreaming up a piece of cloth geometry, and this simulator gradually transforms it into a real-world version of that by subjecting it to real physical forces."}, {"start": 56.0, "end": 60.0, "text": " This is a step that we call your level relaxation."}, {"start": 60.0, "end": 69.0, "text": " A few years ago, when I worked at Disney Research, I attended to the talk of the Oscar-a-world winning researcher Steve Martiner, who presented this paper."}, {"start": 69.0, "end": 79.0, "text": " And when I saw these results, shockwaves went through my body. It was one of my truly formative hold onto your paper's moments that I never forget."}, {"start": 79.0, "end": 86.0, "text": " Now note that to produce these results, one had to wait for hours and hours to compute all these interactions."}, {"start": 86.0, "end": 95.0, "text": " So, this paper was published in 2012, and now nearly nine years have passed, so I wonder how far we have come since."}, {"start": 95.0, "end": 104.0, "text": " Well, let's see together. Today, with this new technique, we can conjure up similar animations where the pieces of garments tighten."}, {"start": 104.0, "end": 109.0, "text": " Beautiful. Now, let's look under the hood of the simulator and..."}, {"start": 109.0, "end": 115.0, "text": " Well, well, well. Do you see what I see here? Red dots."}, {"start": 115.0, "end": 121.0, "text": " So, why did I get so excited about a couple red dots? Let's find out together."}, {"start": 121.0, "end": 127.0, "text": " These red dots solve a fundamental problem when simulating the movement of these yarns."}, {"start": 127.0, "end": 137.0, "text": " The issue is that in the mathematical description of this problem, there is a stiffness term that does not behave well when two of these points slide too close to each other."}, {"start": 137.0, "end": 142.0, "text": " Interestingly, our simulated material gets infinitely stiff in these points."}, {"start": 142.0, "end": 148.0, "text": " This is incorrect behavior, and it makes the simulation unstable. 
Not good."}, {"start": 148.0, "end": 151.0, "text": " So, what do we do to alleviate this?"}, {"start": 151.0, "end": 159.0, "text": " Now, we can use this new technique that detects these cases and addresses them by introducing these additional red nodes."}, {"start": 159.0, "end": 163.0, "text": " These are used as a stand-in until things stabilize."}, {"start": 163.0, "end": 177.0, "text": " Look, we wait until these two points slide off of each other, and now the distances are large enough so that the mathematical framework can regain its validity and compute the stiffness term correctly."}, {"start": 177.0, "end": 184.0, "text": " And, look, the red dot disappears and the simulation can continue without breaking."}, {"start": 184.0, "end": 192.0, "text": " So, if we go back to another piece of under-the-hood footage, we now understand why these red dots come and go."}, {"start": 192.0, "end": 200.0, "text": " They come when two nodes get too close to each other, and they disappear as they pass each other, keeping the simulation intact."}, {"start": 200.0, "end": 211.0, "text": " And, with this method, we can simulate this beautiful phenomenon when we throw a piece of garment on the sphere, and all kinds of stretching and sliding takes place."}, {"start": 211.0, "end": 212.0, "text": " Marvelous!"}, {"start": 212.0, "end": 214.0, "text": " So, what else can this do?"}, {"start": 214.0, "end": 221.0, "text": " Oh boy, it can even simulate multiple cloth layers, look at the pocket and the stitching patterns here."}, {"start": 221.0, "end": 222.0, "text": " Beautiful."}, {"start": 222.0, "end": 235.0, "text": " We can also put a neck tag on this shirt, and start stretching and sharing it into oblivion, pay special attention to the difference in how the shirt and the neck tag reacts to the same forces."}, {"start": 235.0, "end": 244.0, "text": " We can also stack three tablecloths on top of each other, and see how they would behave if we would not simulate friction."}, {"start": 244.0, "end": 247.0, "text": " And now, the same footage with friction."}, {"start": 247.0, "end": 253.0, "text": " Much more realistic."}, {"start": 253.0, "end": 262.0, "text": " And if we look under the hood, you see that the algorithm is doing a ton of work with these red nodes."}, {"start": 262.0, "end": 271.0, "text": " Look, the table nodes that they had to insert tens of thousands of these nodes to keep the simulation intact."}, {"start": 271.0, "end": 280.0, "text": " So, how long do we have to wait for a simulation like this? The 2012 paper took several hours, so what about this one?"}, {"start": 280.0, "end": 288.0, "text": " Well, this says that we need a few seconds per time step, and typically several time steps correspond to one frame."}, {"start": 288.0, "end": 290.0, "text": " So, where does this put us?"}, {"start": 290.0, "end": 309.0, "text": " Well, it puts us in the domain of not hours per every frame of animation here, but two minutes and sometimes even seconds per frame. And not only that, but this simulator is also more robust as it can deal with these unpleasant cases where these points get too close to each other."}, {"start": 309.0, "end": 317.0, "text": " So, I think this was a great testament to the amazing rate of progress in computer graphics research. What a time to be alive."}, {"start": 317.0, "end": 329.0, "text": " This episode has been supported by weights and biases. 
In this post, they show you how to use their tool to draw bounding boxes for object detection and even more importantly, how to debug them."}, {"start": 329.0, "end": 343.0, "text": " During my PhD studies, I trained a ton of neural networks which were used in our experiments. However, over time, there was just too much data in our repositories and what I am looking for is not data, but insight."}, {"start": 343.0, "end": 356.0, "text": " And that's exactly how weights and biases helps you by organizing your experiments. It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub, and more."}, {"start": 356.0, "end": 363.0, "text": " And get this, weights and biases is free for all individuals, academics, and open source projects."}, {"start": 363.0, "end": 373.0, "text": " Make sure to visit them through wnba.com slash papers or just click the link in the video description and you can get a free demo today."}, {"start": 373.0, "end": 378.0, "text": " Our thanks to weights and biases for their longstanding support and for helping us make better videos for you."}, {"start": 378.0, "end": 402.0, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Aq93TSau8GE
This AI Learned To Create Dynamic Photos! 🌁
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their report on this paper is available here: https://wandb.ai/wandb/xfields/reports/-Overview-X-Fields-Implicit-Neural-View-Light-and-Time-Image-Interpolation--Vmlldzo0MTY0MzM 📝 The paper "X-Fields: Implicit Neural View-, Light- and Time-Image Interpolation" is available here: http://xfields.mpi-inf.mpg.de/ 📝 Our paper on neural rendering (and more!) is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/ 📝 Our earlier paper with high-resolution images for the caustics is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/adaptive_metropolis/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-820011/ Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Approximately five months ago, we talked about a technique called Neural Radiance Fields, or NeRF in short, where the input is the location of the camera and an image of what the camera sees. We take a few of those, give them to a neural network to learn them, and synthesize new, previously unseen views of not just the materials in the scene, but the entire scene itself. In short, we take a few samples and the neural network learns what should be there between the samples. In comes non-continuous data, a bunch of photos, and out goes a continuous video where the AI fills in the data between these samples. With this we can change the view direction, but only that. This concept can also be used for other variables. For instance, this work is able to change the lighting, but only the lighting. By the way, this is from long ago, from around Two Minute Papers episode number 13, so our seasoned Fellow Scholars know that this was almost 500 episodes ago. Or, the third potential variable is time. With this AI-based physics simulator, we can advance the time and the algorithm will try to guess how a piece of fluid would evolve over time. This was amazing, but as you might have guessed, we can advance time, but only the time. And this was just a couple of examples from a slew of works that are capable of doing one or at most two of these. These are all amazing techniques, but they offer separate features. One can change the view, but nothing else, one the illumination, but nothing else, and one the time, but nothing else. Due to the advent of neural network-based learning algorithms, I wonder if it is possible to create an algorithm that does all three. Or is this just science fiction? Well, hold on to your papers, because with this new work that goes by the name X-Fields, we can indeed change the time and the view direction and the lighting separately. Or even better, do all three at the same time. Look at how we can play with the time back and forth and set the fluid levels as we desire, that is the time part, and we can also play with the other two parameters as well at the same time. But still, the results that we see here can range from absolutely amazing to trivial depending on just one factor. And that factor is how much training data was available for the algorithm. Neural networks typically require loads of training data to learn a new concept. For instance, if we wish to teach a neural network what a cat is, we have to show it thousands and thousands of images of cats. So how much training data is needed for this? And now hold on to your papers and... Whoa! Look at these five dots here. Do you know what this means? It means that all the AI saw was five images, that is, five samples from the scene with different light positions, and it could fill in all the missing details with such accuracy that we can create this smooth and creamy transition. It almost feels like we have made at least a hundred photographs of the scene. And all this from five input photos. Absolutely amazing. Now, here is my other favorite example. I am a light transport simulation researcher by trade, so by definition I love caustics. A caustic is a beautiful phenomenon in nature where curved surfaces reflect or refract light and concentrate it into a relatively small area. I hope that you are not surprised when I say that it is the favorite phenomenon of most light transport researchers. And just look at how beautifully it deals with it. 
You could take any of these intermediate AI-generated images and sell them as real ones, and I doubt anyone would notice. So, it does three things that previous techniques could only do one by one, but really, how does its quality compare to these previous methods? Let's see how it does on thin geometry, which is a notoriously difficult case for these methods. Here is a previous one. Look, the thick part is reconstructed correctly. However, look at the missing top of the grass blade. Yep, that's gone. A different previous technique by the name Local Light Field Fusion not only missed the top as well, but also introduced halo-like artifacts to the scene. And as you can see with this footage, the new method solves all of these problems really well. And it is quite close to the true reference footage that we kept hidden from the AI. Perhaps the best part is that it also has an online demo that you can try right now, so make sure to click the link in the video description to have a look. Of course, not even this technique is perfect; there are cases where it might confuse the foreground with the background, and we are still not out of the water when it comes to thin geometry. Also, an extension that I would love to see is changing material properties. Here, you see some results from our earlier paper on neural rendering where we can change the material properties of this test object and get a near-perfect photorealistic image of it in about 5 milliseconds per image. I would love to see it combined with a technique like this one, and while it looks super challenging, it is easily possible that we will have something like that within two years. The link to our neural rendering paper and its source code is also available in the video description. What a time to be alive! What you see here is a report on this exact paper we have talked about, which was made by Weights & Biases. I put a link to it in the description, make sure to have a look, I think it helps you understand this paper better. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights & Biases is free for all individuals, academics, and open source projects. It really is as good as it gets. Make sure to visit them through wandb.com/papers or just click the link in the video description and you can get a free demo today. Thanks for watching and for your generous support, and I'll see you next time!
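To make the "learn what should be there between the samples" idea from the transcript above a bit more tangible, here is a hedged, minimal PyTorch sketch of a coordinate-based network that maps a pixel position plus view, light, and time coordinates to a color. This is not the actual X-Fields architecture (which decodes warping fields rather than colors directly); the layer sizes, input layout, and class name are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class CoordinateMLP(nn.Module):
    """Toy continuous image function: (x, y, view, light, time) -> RGB."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(5, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, coords):       # coords: (N, 5)
        return self.net(coords)      # colors: (N, 3)

model = CoordinateMLP()
# After fitting the network to the few captured photos (not shown here),
# we could query a pixel halfway between two captured light positions
# at an in-between time t = 0.25:
query = torch.tensor([[0.5, 0.5, 0.0, 0.5, 0.25]])
print(model(query).shape)  # torch.Size([1, 3])
```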
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karajjola Ifeher."}, {"start": 5.0, "end": 10.3, "text": " Approximately five months ago, we talked about a technique called Neural Radiance Fields"}, {"start": 10.3, "end": 16.46, "text": " or Nerf in short, where the input is the location of the camera and an image of what the camera"}, {"start": 16.46, "end": 17.56, "text": " sees."}, {"start": 17.56, "end": 23.900000000000002, "text": " We take a few of those, give them to a neural network to learn them and synthesize new, previously"}, {"start": 23.900000000000002, "end": 29.98, "text": " unseen views of not just the materials in the scene, but the entire scene itself."}, {"start": 29.98, "end": 35.58, "text": " In short, we take a few samples and the neural network learns what should be there between"}, {"start": 35.58, "end": 36.980000000000004, "text": " the samples."}, {"start": 36.980000000000004, "end": 43.22, "text": " In comes non-continuous data, a bunch of photos, and out goes a continuous video where the"}, {"start": 43.22, "end": 46.54, "text": " AI feels in the data between these samples."}, {"start": 46.54, "end": 51.78, "text": " With this we can change the view direction, but only that."}, {"start": 51.78, "end": 55.019999999999996, "text": " This concept can also be used for other variables."}, {"start": 55.02, "end": 60.300000000000004, "text": " For instance, this work is able to change the lighting, but only the lighting."}, {"start": 60.300000000000004, "end": 66.46000000000001, "text": " By the way, this is from long ago from around two minute papers episode number 13, so our"}, {"start": 66.46000000000001, "end": 72.06, "text": " season fellow scholars know that this was almost 500 episodes ago."}, {"start": 72.06, "end": 75.7, "text": " Or the third potential variable is time."}, {"start": 75.7, "end": 80.62, "text": " With this AI-based physics simulator, we can advance the time and the algorithm would"}, {"start": 80.62, "end": 84.94, "text": " try to guess how a piece of fluid would evolve over time."}, {"start": 84.94, "end": 91.62, "text": " This was amazing, but as you might have guessed, we can advance time, but only the time."}, {"start": 91.62, "end": 95.86, "text": " And this was just a couple of examples from a slow of works that are capable of doing"}, {"start": 95.86, "end": 99.1, "text": " one or at most two of these."}, {"start": 99.1, "end": 104.42, "text": " These are all amazing techniques, but they offer separate features."}, {"start": 104.42, "end": 110.25999999999999, "text": " One can change the view, but nothing else, one for the illumination, but nothing else,"}, {"start": 110.25999999999999, "end": 113.18, "text": " and one for time, but nothing else."}, {"start": 113.18, "end": 118.38000000000001, "text": " To the advent of neural network based learning algorithms, I wonder if it is possible to create"}, {"start": 118.38000000000001, "end": 121.62, "text": " an algorithm that does all three."}, {"start": 121.62, "end": 123.82000000000001, "text": " Or is this just science fiction?"}, {"start": 123.82000000000001, "end": 129.42000000000002, "text": " Well, hold on to your papers because this new work that goes by the name X Fields, we"}, {"start": 129.42000000000002, "end": 137.66, "text": " can indeed change the time and the view direction and the lighting separately."}, {"start": 137.66, "end": 142.26000000000002, "text": " Or even better, do all three at the same time."}, {"start": 142.26, 
"end": 149.54, "text": " Look at how we can play with the time back and forth and set the fluid levels as we desire,"}, {"start": 149.54, "end": 154.66, "text": " that is the time part, and we can also play with the other two parameters as well at the"}, {"start": 154.66, "end": 156.38, "text": " same time."}, {"start": 156.38, "end": 162.98, "text": " But still, the results that we see here can range from absolutely amazing to trivial depending"}, {"start": 162.98, "end": 165.34, "text": " on just one factor."}, {"start": 165.34, "end": 170.1, "text": " And that factor is how much training data was available for the algorithm."}, {"start": 170.1, "end": 175.62, "text": " neural networks typically require loads of training data to learn a new concept."}, {"start": 175.62, "end": 181.06, "text": " For instance, if we wish to teach a neural network what a cat is, we have to show it thousands"}, {"start": 181.06, "end": 183.9, "text": " and thousands of images of cats."}, {"start": 183.9, "end": 187.26, "text": " So how much training data is needed for this?"}, {"start": 187.26, "end": 190.42, "text": " And now hold on to your papers and..."}, {"start": 190.42, "end": 191.74, "text": " Whoa!"}, {"start": 191.74, "end": 193.74, "text": " Look at these five dots here."}, {"start": 193.74, "end": 195.85999999999999, "text": " Do you know what this means?"}, {"start": 195.86, "end": 202.22000000000003, "text": " It means that all the AISR was five images, that is five samples from the scene with different"}, {"start": 202.22000000000003, "end": 208.02, "text": " light positions and it could fill in all the missing details with such accuracy that we"}, {"start": 208.02, "end": 211.10000000000002, "text": " can create this smooth and creamy transition."}, {"start": 211.10000000000002, "end": 216.06, "text": " It almost feels like we have made at least a hundred photographs of the scene."}, {"start": 216.06, "end": 219.18, "text": " And all this from five input photos."}, {"start": 219.18, "end": 220.66000000000003, "text": " Absolutely amazing."}, {"start": 220.66000000000003, "end": 224.06, "text": " Now, here is my other favorite example."}, {"start": 224.06, "end": 230.66, "text": " I am a light transport simulation researcher by trade, so by definition I love caustics."}, {"start": 230.66, "end": 236.58, "text": " A caustic is a beautiful phenomenon in nature where curved surfaces reflect or reflect"}, {"start": 236.58, "end": 240.62, "text": " light and concentrate it to a relatively small area."}, {"start": 240.62, "end": 245.54, "text": " I hope that you are not surprised when I say that it is the favorite phenomenon of most"}, {"start": 245.54, "end": 247.62, "text": " light transport researchers."}, {"start": 247.62, "end": 251.1, "text": " And just look at how beautifully it deals with it."}, {"start": 251.1, "end": 256.98, "text": " You can take any of these intermediate AI generated images and sell them as real ones and"}, {"start": 256.98, "end": 259.06, "text": " I doubt anyone would notice."}, {"start": 259.06, "end": 265.38, "text": " So, it does three things that previous techniques could do one by one, but really how does its"}, {"start": 265.38, "end": 268.58, "text": " quality compare to these previous methods?"}, {"start": 268.58, "end": 273.65999999999997, "text": " Let's see how it does on thin geometry which is a notoriously difficult case for these"}, {"start": 273.65999999999997, "end": 274.65999999999997, "text": " methods."}, {"start": 274.65999999999997, "end": 276.1, 
"text": " Here is a previous one."}, {"start": 276.1, "end": 279.62, "text": " Look, the thick part is reconstructed correctly."}, {"start": 279.62, "end": 282.9, "text": " However, look at the missing top of the grass blade."}, {"start": 282.9, "end": 284.66, "text": " Yep, that's gone."}, {"start": 284.66, "end": 290.18, "text": " A different previous technique by the name Local Lightfield Fusion not only missed the top"}, {"start": 290.18, "end": 295.54, "text": " as well, but also introduced halo-like artifacts to the scene."}, {"start": 295.54, "end": 300.18, "text": " And as you can see with this footage, the new method solves all of these problems really"}, {"start": 300.18, "end": 301.7, "text": " well."}, {"start": 301.7, "end": 306.78000000000003, "text": " And it is quite close to the true reference footage that we kept hidden from the AI."}, {"start": 306.78, "end": 312.41999999999996, "text": " Perhaps the best part is that it also has an online demo that you can try right now,"}, {"start": 312.41999999999996, "end": 316.02, "text": " so make sure to click the link in the video description to have a look."}, {"start": 316.02, "end": 320.61999999999995, "text": " Of course, not even this technique is perfect, there are cases where it might confuse the"}, {"start": 320.61999999999995, "end": 325.78, "text": " foreground with the background and we are still not out of the water when it comes to"}, {"start": 325.78, "end": 327.09999999999997, "text": " thin geometry."}, {"start": 327.09999999999997, "end": 332.34, "text": " Also, an extension that I would love to see is changing material properties."}, {"start": 332.34, "end": 337.34, "text": " Here, you see some results from our earlier paper on Neural Rendering where we can change"}, {"start": 337.34, "end": 343.82, "text": " the material properties of this test object and get a near perfect photorealistic image"}, {"start": 343.82, "end": 347.58, "text": " of it in about 5 milliseconds per image."}, {"start": 347.58, "end": 353.17999999999995, "text": " I would love to see it combined with a technique like this one and while it looks super challenging,"}, {"start": 353.17999999999995, "end": 357.94, "text": " it is easily possible that we will have something like that within two years."}, {"start": 357.94, "end": 362.58, "text": " The link to our Neural Rendering paper and its source code is also available in the video"}, {"start": 362.58, "end": 363.9, "text": " description."}, {"start": 363.9, "end": 365.62, "text": " What a time to be alive!"}, {"start": 365.62, "end": 370.14, "text": " What you see here is a report of this exact paper we have talked about which was made"}, {"start": 370.14, "end": 371.82, "text": " by Wates and Biasis."}, {"start": 371.82, "end": 376.14, "text": " I put a link to it in the description, make sure to have a look, I think it helps you"}, {"start": 376.14, "end": 378.74, "text": " understand this paper better."}, {"start": 378.74, "end": 383.3, "text": " Wates and Biasis provides tools to track your experiments in your deep learning projects."}, {"start": 383.3, "end": 388.06, "text": " Your system is designed to save you a ton of time and money and it is actively used"}, {"start": 388.06, "end": 394.26, "text": " in projects at prestigious labs such as OpenAI, Toyota Research, GitHub and more."}, {"start": 394.26, "end": 399.42, "text": " And the best part is that Wates and Biasis is free for all individuals, academics and"}, {"start": 399.42, "end": 400.90000000000003, "text": " open source 
projects."}, {"start": 400.90000000000003, "end": 403.58000000000004, "text": " It really is as good as it gets."}, {"start": 403.58000000000004, "end": 409.26, "text": " Make sure to visit them through wnb.com slash papers or just click the link in the video"}, {"start": 409.26, "end": 412.34000000000003, "text": " description and you can get a free demo today."}, {"start": 412.34, "end": 422.06, "text": " Thanks for watching and for your generous support and I'll see you next time!"}]
Two Minute Papers
https://www.youtube.com/watch?v=2wrOHdvAiNc
All Duckies Shall Pass! 🐣
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Interlinked SPH Pressure Solvers for Strong Fluid-Rigid Coupling" is available here: https://cg.informatik.uni-freiburg.de/publications/2019_TOG_strongCoupling_v1.pdf 📸 Our Instagram page is available here: https://www.instagram.com/twominutepapers/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. In this paper, you will not only see an amazing technique for two-way coupled fluid-solid simulations, but you will also see some of the most creative demonstrations of this new method I've seen in a while. But first things first, what is this two-way coupling thing? Two-way coupling in a fluid simulation means being able to process contact. You see, the armadillos can collide with the fluids, but the fluids are also allowed to move the armadillos. And this new work can compute this kind of contact. And by this, I mean lots and lots of contact. But one of the most important lessons of this paper is that we don't necessarily need a scene this crazy to be dominated by two-way coupling. Have a look at this experiment with these adorable duckies, where, below, the propellers are starting up. Please don't be one of those graphics papers. Okay, good. We dodged this one. So the propellers are not here to dismember things. They are here to spin up to 160 RPM. And since they are two-way coupled, they pump water from one tank to the other, raising the water levels and allowing the duckies to pass. An excellent demonstration of a proper algorithm that can compute two-way coupling really well. And simulating this scene is much, much more challenging than we might think. Why is that? Note that the speed of the propellers is quite high, which is a huge challenge for previous methods. If we wish to complete the simulation in a reasonable amount of time, it simulates the interaction incorrectly and no ducks can pass. The new technique can simulate this correctly, and not only that, but it is also 4.5 times faster than the previous method. Also, check out this elegant demonstration of two-way coupling. We start slowly unscrewing this bolt, and nothing too crazy is going on here. However, look, we have tiny cutouts in the bolt, allowing the water to start gushing out. The pipe was made transparent so we can track the water levels slowly decreasing. And finally, when the bolt falls out, we get some more two-way coupling action with the water. Once more, such a beautiful demonstration of a difficult-to-simulate phenomenon. Loving it. A traditional technique cannot simulate this properly unless we add a lot of extra computation, at which point it is still unstable. Ouch! And with even more extra computation, we can finally do this, but hold on to your papers, because the newly proposed technique can do it about 10 times faster. It also supports contact-rich geometry. Look, we have a great deal going on here. You are seeing up to 38 million fluid particles interacting with these walls with lots of rich geometry, and there will be interactions with mud and elastic trees as well. This technique can really do them all. And did you notice that throughout this video we saw a lot of delta Ts? What are those? Delta T is what we call the time step size. The smaller this number is, the tinier the time steps with which we can advance the simulation when computing every interaction, and hence the more steps there are to compute. In simpler words, the time step size is an important factor in the computation time: the smaller it is, the slower but more accurate the simulation will be. This is why the previous method needed to reduce the time steps by more than 30 times to get a stable simulation here.
And this paper proposes a technique that can get away with time steps that are typically 10 to 100 times larger than those of previous methods. And it is still stable. That is an incredible achievement. So what does that mean in a practical case? Well, hold on to your papers, because this means that it is up to 58 times faster than previous methods. 58 times. Whoa! With a previous method, I would need to run something for nearly two months, and the new method would be able to compute the same within a day. Witchcraft, I'm telling you. What a time to be alive. Also, as usual, I couldn't resist creating a slow-motion version of some of these videos, so if this is something that you wish to see, make sure to visit our Instagram page in the video description for more. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold on to your papers, because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48 GB RTX 8000s. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
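To make the time step discussion above a bit more concrete, here is a minimal, generic sketch of how the time step size trades off cost against stability. This toy damped oscillator with explicit Euler integration is purely illustrative: it is not the paper's interlinked SPH pressure solver, and all constants are made up.

```python
# Toy sketch of the delta t trade-off: smaller time steps mean more steps
# (more compute) for the same simulated duration, while overly large steps
# make a naive integrator blow up. This is NOT the paper's SPH solver,
# just a generic damped oscillator integrated with explicit (forward) Euler.

def simulate(dt, duration=10.0, stiffness=50.0, damping=1.0):
    x, v = 1.0, 0.0                  # initial position and velocity
    steps = int(duration / dt)
    for _ in range(steps):
        a = -stiffness * x - damping * v
        x += v * dt                  # forward Euler: use the old velocity
        v += a * dt
    return x, steps

for dt in (0.2, 0.01, 0.001):        # the largest step blows up numerically
    x, steps = simulate(dt)
    print(f"dt={dt:<6}  steps={steps:>6}  final x={x: .3e}")

# A solver that stays stable with a 10-100x larger dt needs 10-100x fewer
# steps, which is where speedups like the reported 58x can come from
# (roughly two months of compute shrinking to about a day).
```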
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zonai-Fehir."}, {"start": 4.64, "end": 11.200000000000001, "text": " In this paper, you will not only see an amazing technique for two-way coupled fluid solid simulations,"}, {"start": 11.200000000000001, "end": 17.44, "text": " but you will also see some of the most creative demonstrations of this new method I've seen in a while."}, {"start": 17.44, "end": 21.36, "text": " But first things first, what is this two-way coupling thing?"}, {"start": 21.36, "end": 26.72, "text": " Two-way coupling in a fluid simulation means being able to process contact."}, {"start": 26.72, "end": 34.32, "text": " You see, the armadillos can collide with the fluids, but the fluids are also allowed to move the armadillos."}, {"start": 34.32, "end": 37.6, "text": " And this new work can compute this kind of contact."}, {"start": 37.6, "end": 41.519999999999996, "text": " And by this, I mean lots and lots of contact."}, {"start": 41.519999999999996, "end": 47.28, "text": " But one of the most important lessons of this paper is that we don't necessarily need a scene"}, {"start": 47.28, "end": 50.72, "text": " discrazy to be dominated by two-way coupling."}, {"start": 50.72, "end": 56.96, "text": " Have a look at this experiment with these adorable duckies and below the propellers are starting up."}, {"start": 57.6, "end": 60.56, "text": " Please don't be one of those graphics papers."}, {"start": 62.48, "end": 65.03999999999999, "text": " Okay, good. We dodged this one."}, {"start": 65.03999999999999, "end": 68.08, "text": " So the propellers are not here to dismember things."}, {"start": 68.08, "end": 72.16, "text": " They are here to spin up to 160 RPM."}, {"start": 72.16, "end": 76.96000000000001, "text": " And since they are two-way coupled, they pump water from one tank to the other,"}, {"start": 76.96, "end": 80.55999999999999, "text": " raising the water levels, allowing the duckies to pass."}, {"start": 81.28, "end": 87.52, "text": " An excellent demonstration of a proper algorithm that can compute two-way coupling really well."}, {"start": 87.52, "end": 92.16, "text": " And simulating this scene is much, much more challenging than we might think."}, {"start": 92.8, "end": 94.0, "text": " Why is that?"}, {"start": 94.0, "end": 99.52, "text": " Note that the speed of the propellers is quite high, which is a huge challenge to previous methods."}, {"start": 100.16, "end": 103.83999999999999, "text": " If we wish to complete the simulation in a reasonable amount of time,"}, {"start": 103.84, "end": 108.72, "text": " it simulates the interaction incorrectly and no ducks can pass."}, {"start": 111.44, "end": 116.32000000000001, "text": " The no technique can simulate this correctly and not only that, but it is also"}, {"start": 116.32000000000001, "end": 119.2, "text": " 4.5 times faster than the previous method."}, {"start": 120.56, "end": 124.56, "text": " Also, check out this elegant demonstration of two-way coupling."}, {"start": 124.56, "end": 130.08, "text": " We start slowly unscrewing this bolt and nothing too crazy going on here."}, {"start": 130.08, "end": 138.24, "text": " However, look, we have tiny cutouts in the bolt, allowing the water to start gushing out."}, {"start": 138.24, "end": 143.76000000000002, "text": " The pipe was made transparent so we can track the water levels slowly decreasing."}, {"start": 143.76000000000002, "end": 151.28, "text": " And finally, when the bolt falls out, we get 
some more two-way coupling action with the water."}, {"start": 151.28, "end": 156.8, "text": " Once more, such a beautiful demonstration of a difficult to simulate phenomenon."}, {"start": 156.8, "end": 164.56, "text": " Loving it. A traditional technique cannot simulate this properly, unless we add a lot of extra"}, {"start": 164.56, "end": 171.76000000000002, "text": " computation, at which point it is still unstable. Ouch! And even with more extra computation,"}, {"start": 171.76000000000002, "end": 178.16000000000003, "text": " we can finally do this, but hold onto your papers because the no proposed technique can do it"}, {"start": 178.16000000000003, "end": 186.48000000000002, "text": " about 10 times faster. It also supports contact-rich geometry as well. Look, we have a great deal going"}, {"start": 186.48, "end": 192.79999999999998, "text": " on here. You are seeing up to 38 million fluid particles interacting with these walls,"}, {"start": 192.79999999999998, "end": 197.2, "text": " given with lots of rich geometry, and there will be interactions with mud,"}, {"start": 197.83999999999997, "end": 201.67999999999998, "text": " and elastic trees as well. This can really do them all."}, {"start": 202.39999999999998, "end": 208.56, "text": " And did you notice that throughout this video, we saw a lot of delta T's. What are those?"}, {"start": 209.6, "end": 215.83999999999997, "text": " Delta T is something that we call time step size. The smaller this number is, the tinier the"}, {"start": 215.84, "end": 220.88, "text": " time steps, with which we can advance the simulation when computing every interaction,"}, {"start": 220.88, "end": 227.2, "text": " and hence the more steps there are to compute. In simpler words, generally, time step size is"}, {"start": 227.2, "end": 233.68, "text": " an important factor in the computation time, and the smaller this is, the slower, but more accurate"}, {"start": 233.68, "end": 239.84, "text": " the simulation will be. This is why we needed to reduce the time steps by more than 30 times to"}, {"start": 239.84, "end": 245.52, "text": " get a stable simulation here with the previous method. And this paper proposes a technique that"}, {"start": 245.52, "end": 252.24, "text": " can get away with time steps that are typically from 10 times to 100 times larger than previous"}, {"start": 252.24, "end": 259.76, "text": " methods. And it is still stable. That is an incredible achievement. So what does that mean in a"}, {"start": 259.76, "end": 266.96000000000004, "text": " practical case? Well, hold on to your papers because this means that it is up to 58 times faster"}, {"start": 266.96000000000004, "end": 274.8, "text": " than previous methods. 58 times. Whoa! With a previous method, I would need to run something for"}, {"start": 274.8, "end": 280.16, "text": " nearly two months, and the new method would be able to compute the same within a day."}, {"start": 280.88, "end": 287.12, "text": " Witchcraft, I'm telling you. What a time to be alive. Also, as usual, I couldn't resist"}, {"start": 287.12, "end": 292.64, "text": " creating a slow motion version of some of these videos, so if this is something that you wish to see,"}, {"start": 292.64, "end": 296.16, "text": " make sure to visit our Instagram page in the video description for more."}, {"start": 296.88, "end": 303.04, "text": " This episode has been supported by Lambda GPU Cloud. 
If you're looking for inexpensive Cloud GPUs"}, {"start": 303.04, "end": 312.24, "text": " for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100"}, {"start": 312.24, "end": 319.76, "text": " instances, and hold on to your papers because Lambda GPU Cloud can cost less than half of AWS"}, {"start": 319.76, "end": 328.08000000000004, "text": " and Asia. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations"}, {"start": 328.08, "end": 334.4, "text": " like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers."}, {"start": 334.4, "end": 340.08, "text": " Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU"}, {"start": 340.08, "end": 345.44, "text": " instances today. Our thanks to Lambda for their long-standing support and for helping us make"}, {"start": 345.44, "end": 359.28, "text": " better videos for you. Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=nJ86LCA0Asw
Building A Liquid Labyrinth! 🌊
❤️ Check out Perceptilabs and sign up for a free demo here: https://www.perceptilabs.com/papers 📝 The paper "Surface-Only Ferrofluids" is available here: http://computationalsciences.org/publications/huang-2020-ferrofluids.html You can follow this research group on Twitter too: https://twitter.com/csgKAUST 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #ferrofluid
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. You are in for a real treat today, because today, once again, we are not going to simulate just plain regular fluids. No, we are going to simulate ferrofluids. These are fluids that have magnetic properties and respond to an external magnetic field, and get this: they are even able to climb things. Look at this footage from a previous paper. Here is a legendary, real experiment where, with magnetism, we can make a ferrofluid climb up this steel helix. Look at that. And now, the simulation. Look at how closely it matches the real footage. Marvelous. Especially since it is hard to overstate how challenging it is to create an accurate simulation like this. And the paper got even better. This footage could even be used as proper teaching material. On this axis, you can see how the fluid disturbances get more pronounced in response to a stronger magnetic field. And in this direction, you see how the effect of surface tension smooths out these shapes. What a visualization. The information density here is just out of this world, while it is still so easy to read at a glance. And it is also absolutely beautiful. This paper was a true masterpiece. The first author of this work was Libo Huang, and it was advised by Professor Dominik Michels, who has a strong physics background. And here's the punchline: Libo Huang is a PhD student, and this was his first paper. Let me say it again. This was Libo Huang's first paper, and it is a masterpiece. Wow. And it gets better, because this new paper is called Surface-Only Ferrofluids and, yes, it is from the same authors. So this paper is supposed to be better, but the previous technique set a really high bar. How the heck do you beat that? What more could we possibly ask for? Well, this new method showcases a surface-only formulation, and the key observation here is that for a class of ferrofluids, we don't have to compute how the magnetic forces act on the entirety of the 3D fluid domain; we only have to compute them on the surface of the model. So what does this give us? One of my favorite experiments. In this case, we squeeze the fluid between two glass planes and start cranking up the magnetic field strength perpendicular to these planes. Of course, we expect it to start flowing sideways, but not at all in the way we would expect. Wow. Look at how these beautiful fluid labyrinths start slowly forming. And we can simulate all this on our home computers today. We are truly living in a science fiction world. Now, if you find yourself missing the climbing experiment from the previous paper, don't despair, this one can still do that too. Look, first we can control the movement of the fluid by turning on the upper magnet, then slowly turn it off while turning on the lower magnet to give rise to this beautiful climbing phenomenon. And that's not all. Fortunately, this work is also rich in amazing visualizations. For instance, this one shows how the ferrofluid changes if we crank up the strength of our magnets, and how changing the surface tension determines the distance between the spikes and the overall smoothness of the fluid. What a time to be alive. One of the limitations of this technique is that it does not deal with viscosity well, so if we are looking to create a crazy goo simulation like this one, but with ferrofluids, we will need something else for that. Perhaps that something will be the next paper down the line.
PerceptiLabs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables and gives you recommendations both during modeling and training, and it does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers to easily install the free local version of their system today. Our thanks to PerceptiLabs for their support and for helping us make better videos for you. Thank you for watching and for your generous support, and I'll see you next time.
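The surface-only idea described in this transcript can be motivated with a quick back-of-the-envelope count: sampling only the surface of a fluid blob touches far fewer points than sampling its whole volume. The sketch below is just that counting argument with made-up resolutions; the actual paper derives a genuine surface-only formulation of the magnetic forces, so this is only an intuition, not their method.

```python
# Rough counting argument for why a surface-only formulation is cheaper.
# For a cube-shaped blob of fluid discretized at resolution n per axis,
# a volumetric method touches ~n^3 samples, while a surface-only method
# touches ~6*n^2 (the six faces). Resolutions are made up for illustration.
for n in (32, 64, 128, 256):
    volume_samples = n ** 3
    surface_samples = 6 * n ** 2
    ratio = volume_samples / surface_samples
    print(f"n={n:3d}  volume~{volume_samples:>11,}  "
          f"surface~{surface_samples:>9,}  ratio~{ratio:5.1f}x")
```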
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two minute papers with Dr. K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.6000000000000005, "end": 9.96, "text": " You are in for a real treat today, because today, once again, we are not going to simulate"}, {"start": 9.96, "end": 12.200000000000001, "text": " just plain regular fluids."}, {"start": 12.200000000000001, "end": 16.44, "text": " No, we are going to simulate ferrofluids."}, {"start": 16.44, "end": 21.84, "text": " These are fluids that have magnetic properties and respond to an external magnetic field and"}, {"start": 21.84, "end": 22.84, "text": " get this."}, {"start": 22.84, "end": 25.52, "text": " They are even able to climb things."}, {"start": 25.52, "end": 27.64, "text": " Look at this footage from a previous paper."}, {"start": 27.64, "end": 33.52, "text": " Here is a legendary, real experiment where with magnetism, we can make a ferrofluid"}, {"start": 33.52, "end": 36.32, "text": " climb up on these steel helix."}, {"start": 36.32, "end": 37.68, "text": " Look at that."}, {"start": 37.68, "end": 40.08, "text": " And now, the simulation."}, {"start": 40.08, "end": 43.28, "text": " Look at how closely it matches the real footage."}, {"start": 43.28, "end": 44.6, "text": " Marvelous."}, {"start": 44.6, "end": 50.0, "text": " Especially that it is hard to overstate how challenging it is to create an accurate simulation"}, {"start": 50.0, "end": 52.0, "text": " like this."}, {"start": 52.0, "end": 54.36, "text": " And the paper got even better."}, {"start": 54.36, "end": 58.24, "text": " This footage could even be used as proper teaching material."}, {"start": 58.24, "end": 64.32, "text": " On this axis, you can see how the fluid disturbances get more pronounced as a response to a stronger"}, {"start": 64.32, "end": 66.12, "text": " magnetic field."}, {"start": 66.12, "end": 72.76, "text": " And in this direction, you see how the effect of surface tension smooths out these shapes."}, {"start": 72.76, "end": 74.24, "text": " What a visualization."}, {"start": 74.24, "end": 80.0, "text": " The information density here is just out of this world while it is still so easy to read"}, {"start": 80.0, "end": 81.36, "text": " at a glance."}, {"start": 81.36, "end": 84.68, "text": " And it is also absolutely beautiful."}, {"start": 84.68, "end": 87.28, "text": " This paper was a true masterpiece."}, {"start": 87.28, "end": 92.4, "text": " The first author of this work was Liboh Wang and it was advised by professor Dominik"}, {"start": 92.4, "end": 95.6, "text": " Michaz, who has a strong physics background."}, {"start": 95.6, "end": 102.0, "text": " And here's the punchline, Liboh Wang is a PhD student and this was his first paper."}, {"start": 102.0, "end": 103.0, "text": " Let me say it again."}, {"start": 103.0, "end": 108.2, "text": " This was Liboh Wang's first paper and it is a masterpiece."}, {"start": 108.2, "end": 109.28, "text": " Wow."}, {"start": 109.28, "end": 115.88, "text": " And it gets better because this new paper is called Surface Only Faro Fluids and yes,"}, {"start": 115.88, "end": 118.52, "text": " it is from the same authors."}, {"start": 118.52, "end": 123.48, "text": " So this paper is supposed to be better, but the previous technique set a really high"}, {"start": 123.48, "end": 124.48, "text": " bar."}, {"start": 124.48, "end": 126.44, "text": " How the heck do you beat that?"}, {"start": 126.44, "end": 128.8, "text": " What more could we possibly ask for?"}, {"start": 
128.8, "end": 134.4, "text": " Well, this new method showcases a surface only formulation and the key observation here"}, {"start": 134.4, "end": 139.0, "text": " is that for a class of ferrofluids, we don't have to compute how the magnetic forces"}, {"start": 139.0, "end": 144.84, "text": " act on the entirety of the 3D fluid domain, we only have to compute them on the surface"}, {"start": 144.84, "end": 146.24, "text": " of the model."}, {"start": 146.24, "end": 148.76, "text": " So what does this give us?"}, {"start": 148.76, "end": 150.92, "text": " One of my favorite experiments."}, {"start": 150.92, "end": 156.52, "text": " In this case, we squeeze the fluid between two glass planes and start cranking up the magnetic"}, {"start": 156.52, "end": 159.72, "text": " field strength perpendicular to these planes."}, {"start": 159.72, "end": 165.6, "text": " Of course, we expect that it starts flowing sideways, but not at all how we would expect"}, {"start": 165.6, "end": 166.6, "text": " it."}, {"start": 166.6, "end": 167.6, "text": " Wow."}, {"start": 167.6, "end": 173.2, "text": " Look at how these beautiful fluid librarians start slowly forming."}, {"start": 173.2, "end": 177.16, "text": " And we can simulate all this on our home computers today."}, {"start": 177.16, "end": 180.35999999999999, "text": " We are truly living in a science fiction world."}, {"start": 180.35999999999999, "end": 186.16, "text": " Now, if you find yourself missing the climbing experiment from the previous paper, don't"}, {"start": 186.16, "end": 188.6, "text": " despair, this can still do that too."}, {"start": 188.6, "end": 195.24, "text": " Look, first we can control the movement of the fluid by turning on the upper magnet, then"}, {"start": 195.24, "end": 201.24, "text": " slowly turn it off while turning on the lower magnet to give rise to this beautiful climbing"}, {"start": 201.24, "end": 202.48000000000002, "text": " phenomenon."}, {"start": 202.48000000000002, "end": 203.48000000000002, "text": " And that's not all."}, {"start": 203.48000000000002, "end": 207.92000000000002, "text": " Fortunately, this work is also ample in amazing visualizations."}, {"start": 207.92000000000002, "end": 213.16000000000003, "text": " For instance, this one shows how the ferrofluid changes if we crank up the strength of our"}, {"start": 213.16000000000003, "end": 219.24, "text": " magnets and how changing the surface tension determines the distance between the spikes"}, {"start": 219.24, "end": 222.28, "text": " and the overall smoothness of the fluid."}, {"start": 222.28, "end": 224.20000000000002, "text": " What a time to be alive."}, {"start": 224.2, "end": 228.72, "text": " One of the limitations of this technique is that it does not deal with viscosity well,"}, {"start": 228.72, "end": 234.67999999999998, "text": " so if we are looking to create a crazy goo simulation like this one, but with ferrofluids,"}, {"start": 234.67999999999998, "end": 237.04, "text": " we will need something else for that."}, {"start": 237.04, "end": 240.79999999999998, "text": " Perhaps that something will be the next paper down the line."}, {"start": 240.79999999999998, "end": 246.07999999999998, "text": " PerceptiLabs is a visual API for TensorFlow carefully designed to make machine learning"}, {"start": 246.07999999999998, "end": 248.39999999999998, "text": " as intuitive as possible."}, {"start": 248.39999999999998, "end": 253.32, "text": " This gives you a faster way to build out models with more transparency into how your model"}, 
{"start": 253.32, "end": 257.59999999999997, "text": " is architected, how it performs, and how to debug it."}, {"start": 257.59999999999997, "end": 262.24, "text": " Look, it lets you toggle between the visual modeler and the code editor."}, {"start": 262.24, "end": 267.28, "text": " It even generates visualizations for all the model variables and gives you recommendations"}, {"start": 267.28, "end": 271.96, "text": " both during modeling and training and does all this automatically."}, {"start": 271.96, "end": 276.52, "text": " I only wish I had a tool like this when I was working on my neural networks during my"}, {"start": 276.52, "end": 278.15999999999997, "text": " PhD years."}, {"start": 278.16, "end": 283.20000000000005, "text": " And that's it, perceptilabs.com slash papers to easily install the free local version of"}, {"start": 283.20000000000005, "end": 284.68, "text": " their system today."}, {"start": 284.68, "end": 289.36, "text": " Our thanks to perceptilabs for their support and for helping us make better videos for"}, {"start": 289.36, "end": 290.36, "text": " you."}, {"start": 290.36, "end": 318.88, "text": " Thank you for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=C7D5EzkhT6A
OpenAI DALL-E: Fighter Jet For The Mind! ✈️
❤️ Check out Perceptilabs and sign up for a free demo here: https://www.perceptilabs.com/papers 📝 The blog post on "DALL-E: Creating Images from Text" is available here: https://openai.com/blog/dall-e/ Tweet sources: - Code completion: https://twitter.com/gdm3000/status/1151469462614368256 - Website layout: https://twitter.com/sharifshameem/status/1283322990625607681 - Population data: https://twitter.com/pavtalk/status/1285410751092416513 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credits: https://pixabay.com/images/id-3202725/ Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #openai #dalle #dalle2
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. In early 2019, a learning-based technique appeared that could perform common natural language processing operations, for instance, answering questions, completing text, reading comprehension, summarization, and more. This method was developed by scientists at OpenAI, and they called it GPT-2. The key idea for GPT-2 was that all of these problems could be formulated as different variants of text completion problems, where all we need to do is provide it an incomplete piece of text and it will try to finish it. Then, in June 2020, came GPT-3, which supercharged this idea and, among many incredible examples, could generate website layouts from a written description. However, no one said that these neural networks can only deal with text information, and, sure enough, a few months later, scientists at OpenAI thought that if we can complete text sentences, why not try to complete images too? They called this project Image GPT, and the problem statement was simple. We give it an incomplete image, and we ask the AI to fill in the missing pixels. It could identify that the cat here likely holds a piece of paper and finish the picture accordingly, and it even understood that if we have a droplet here and we see just a portion of the ripples, then this means a splash must be filled in. And now, right in January 2021, just seven months after the release of GPT-3, here is their new mind-blowing technique that explores the connection between text and images. But finishing images already kind of works, so what new thing can it do? In just a few moments you will see that the more appropriate question would be, what can't it do? For now, well, it creates images from our written text captions, and you will see in a moment how monumental of a challenge that is. The name of this technique is a mix of Salvador Dalí and Pixar's WALL-E. So please meet DALL-E. And now let's see it through an example. For neural network-based learning methods, it is easy to recognize that this text says OpenAI and what a storefront is. Images of both of these exist in abundance. Understanding that is simple. However, generating a storefront that says OpenAI is quite a challenge. Is it really possible that it can do that? Well, let's try it. Look, it works. Wow! Now, of course, if you look here, you immediately see that it is by no means perfect, but let's marvel at the fact that we can get all kinds of 2D and 3D text. Look at the storefronts from different orientations, and it can deal with all of these cases reasonably well. And of course, it is not limited to storefronts. We can request license plates, bags of chips, neon signs, and more. It can really do all that. So what else? Well, get this. It can also kind of invent new things. So let's put our entrepreneurial hat on and try to invent something here. For instance, let's try to create a triangular clock. Or a pentagonal one. Or, you know, just make it a hexagon. It really doesn't matter, because we can ask for absolutely anything and get a bunch of prototypes in a matter of seconds. Now let's make it white and look. Now we have a happy, happy Károly. Why is that? It is because I am a light transport researcher by trade, so the first thing I look at when seeing these generated images is how physically plausible they are. For instance, look at this white clock here on the blue table. It not only put it on the table, but it also made sure to generate appropriate glossy reflections that match the color of the clock.
It can do this too. Loving it. Apparently, it understands geometry, shapes, and materials. I wonder what else it understands. Well, get this. For instance, it even understands styles and rendering techniques. Being a graphics person, I am so happy to see that it learned the concept of low-polygon-count rendering, isometric views, and clay objects, and we can even add an X-ray view to the owl. Kind of. And now, if all that wasn't enough, hold on to your papers, because we can also commission artistic illustrations for free, and not only that, but we can even have fine-grained control over these artistic illustrations. I also learned that if manatees wore suits, they would wear them like this, and after a long and strenuous day walking their dogs, they can go for yet another round in pajamas. But it does not stop there. It can not only generate paintings of nearly anything, but we can even choose the artistic style and the time of day as well. The night images are a little on the nose, as most of them have the moon in the background, but I'll be more than happy to take these. And the best part is that you can try it yourself right now through the link in the video description. In general, not all results are perfect, but it's hard to even fathom all the things this will enable us to do in the near future, when we can get our hands on these pre-trained models. And this may be the first technique where the results are not limited by the algorithm, but by our own imagination. Now, this is something I said about GPT-3, and notice that the exact same thing can be said about DALL-E. Quote: the main point is that working with GPT-3 is a really peculiar process, where we know that a vast body of knowledge lies within, but it only emerges if we can bring it out with properly written prompts. It almost feels like a new kind of programming that is open to everyone, even people without any programming or technical knowledge. If a computer is a bicycle for the mind, then GPT-3 is a fighter jet. Absolutely incredible. And I think this kind of programming is going to be more and more common in the future. Now, note that these are some amazing preliminary results, but the full paper is not available yet. So this was not two minutes, and it was not about a paper. Welcome to Two Minute Papers. Jokes aside, I cannot wait for the paper to appear, and I'll be here to have a closer look whenever it happens. Make sure to subscribe and hit the bell icon to not miss it when the big day comes. Until then, let me know in the comments what crazy concoctions you came up with. PerceptiLabs is a visual API for TensorFlow, carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables and gives you recommendations both during modeling and training, and it does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com slash papers to easily install the free local version of their system today. Our thanks to PerceptiLabs for their support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
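The "everything is text completion" framing described in this transcript is easy to try for yourself with the publicly released GPT-2 model. The snippet below uses the Hugging Face transformers library as a stand-in, which is my own assumption and is not mentioned in the video; DALL-E itself was not publicly runnable this way at the time.

```python
# Posing a question-answering task as plain text completion with GPT-2,
# illustrating the framing described above. Requires the Hugging Face
# `transformers` package (pip install transformers torch), which is an
# assumption of this sketch, not something the video mentions.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled continuation reproducible
generator = pipeline("text-generation", model="gpt2")

prompt = "Question: What is machine learning? Answer:"
completion = generator(prompt, max_length=60, num_return_sequences=1)
print(completion[0]["generated_text"])
```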
[{"start": 0.0, "end": 4.8, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.8, "end": 10.44, "text": " In early 2019, a learning-based technique appeared that could perform common natural language"}, {"start": 10.44, "end": 17.52, "text": " processing operations, for instance, answering questions, completing text, reading comprehension,"}, {"start": 17.52, "end": 20.0, "text": " summarization, and more."}, {"start": 20.0, "end": 25.88, "text": " This method was developed by scientists at OpenAI and they called it GPT2."}, {"start": 25.88, "end": 31.16, "text": " The key idea for GPT2 was that all of these problems could be formulated as different"}, {"start": 31.16, "end": 36.68, "text": " variants of text completion problems where all we need to do is provide it an incomplete"}, {"start": 36.68, "end": 40.28, "text": " piece of text and it would try to finish it."}, {"start": 40.28, "end": 48.0, "text": " Then in June 2020 came GPT3 that supercharged this idea and among many incredible examples"}, {"start": 48.0, "end": 51.879999999999995, "text": " it could generate website layouts from a written description."}, {"start": 51.88, "end": 58.88, "text": " However, no one said that these neural networks can only deal with text information and, sure"}, {"start": 58.88, "end": 66.2, "text": " enough, a few months later, scientists at OpenAI thought that if we can complete text sentences,"}, {"start": 66.2, "end": 69.2, "text": " why not try to complete images too?"}, {"start": 69.2, "end": 74.16, "text": " They called this project image GPT and the problem statement was simple."}, {"start": 74.16, "end": 80.04, "text": " We give it an incomplete image and we ask the AI to fill in the missing pixels."}, {"start": 80.04, "end": 85.56, "text": " That could identify that the cat here likely holds a piece of paper and finish the picture"}, {"start": 85.56, "end": 92.48, "text": " accordingly and even understood that if we have a droplet here and we see just a portion"}, {"start": 92.48, "end": 97.12, "text": " of the ripples, then this means a splash must be filled in."}, {"start": 97.12, "end": 104.60000000000001, "text": " And now, right in January 2021, just seven months after the release of GPT3, here is"}, {"start": 104.6, "end": 110.64, "text": " their new mind-blowing technique that explores the connection between text and images."}, {"start": 110.64, "end": 116.75999999999999, "text": " But finishing images already kind of works, so what new thing can it do?"}, {"start": 116.75999999999999, "end": 121.19999999999999, "text": " In just a few moments you will see that the more appropriate question would be, what"}, {"start": 121.19999999999999, "end": 123.03999999999999, "text": " can it do?"}, {"start": 123.03999999999999, "end": 128.56, "text": " For now, well, it creates images from our written text captions and you will see in a moment"}, {"start": 128.56, "end": 131.44, "text": " how monumental of a challenge that is."}, {"start": 131.44, "end": 136.88, "text": " The name of this technique is a mix of Salvador Dal\u00ed and Pixar's Wally."}, {"start": 136.88, "end": 139.32, "text": " So please meet Dal\u00ed."}, {"start": 139.32, "end": 142.07999999999998, "text": " And now let's see it through an example."}, {"start": 142.07999999999998, "end": 146.84, "text": " For neural network-based learning methods, it is easy to recognize that this text says"}, {"start": 146.84, "end": 150.76, "text": " OpenAI and what a storefront is."}, 
{"start": 150.76, "end": 154.28, "text": " Images of both of these exist in abundance."}, {"start": 154.28, "end": 156.04, "text": " Understanding that is simple."}, {"start": 156.04, "end": 161.84, "text": " However, generating a storefront that says OpenAI is quite a challenge."}, {"start": 161.84, "end": 164.6, "text": " Is it really possible that it can do that?"}, {"start": 164.6, "end": 166.6, "text": " Well, let's try it."}, {"start": 166.6, "end": 168.79999999999998, "text": " Look, it works."}, {"start": 168.79999999999998, "end": 169.79999999999998, "text": " Wow!"}, {"start": 169.79999999999998, "end": 176.16, "text": " Now, of course, if you look here, you immediately see that it is by no means perfect, but let's"}, {"start": 176.16, "end": 181.12, "text": " marvel at the fact that we can get all kinds of 2D and 3D text."}, {"start": 181.12, "end": 186.8, "text": " Look at the storefronts from different orientations and it can deal with all of these cases reasonably"}, {"start": 186.8, "end": 187.8, "text": " well."}, {"start": 187.8, "end": 190.12, "text": " And of course, it is not limited to storefronts."}, {"start": 190.12, "end": 200.52, "text": " We can request license plates, bags of chips, neon signs and more."}, {"start": 200.52, "end": 202.92000000000002, "text": " It can really do all that."}, {"start": 202.92000000000002, "end": 204.68, "text": " So what else?"}, {"start": 204.68, "end": 205.84, "text": " Well, get this."}, {"start": 205.84, "end": 209.64000000000001, "text": " It can also kind of invent new things."}, {"start": 209.64, "end": 214.48, "text": " So let's put our entrepreneurial hat on and try to invent something here."}, {"start": 214.48, "end": 218.35999999999999, "text": " For instance, let's try to create a triangular clock."}, {"start": 218.35999999999999, "end": 220.44, "text": " Or pentagonal."}, {"start": 220.44, "end": 223.39999999999998, "text": " Or you know, just make it a hexagon."}, {"start": 223.39999999999998, "end": 228.11999999999998, "text": " It really doesn't matter because we can ask for absolutely anything and get a bunch"}, {"start": 228.11999999999998, "end": 231.23999999999998, "text": " of prototypes in a matter of seconds."}, {"start": 231.23999999999998, "end": 234.79999999999998, "text": " Now let's make it white and look."}, {"start": 234.79999999999998, "end": 238.0, "text": " Now we have a happy, happy caroy."}, {"start": 238.0, "end": 239.35999999999999, "text": " Why is that?"}, {"start": 239.36, "end": 244.28, "text": " It is because I am a light transport researcher by trade, so the first thing I look at when"}, {"start": 244.28, "end": 248.68, "text": " seeing these generated images is how physically plausible they are."}, {"start": 248.68, "end": 252.44000000000003, "text": " For instance, look at this white clock here on the blue table."}, {"start": 252.44000000000003, "end": 258.0, "text": " And it not only put it on the table, but it also made sure to generate appropriate glossary"}, {"start": 258.0, "end": 261.64, "text": " reflections that matches the color of the clock."}, {"start": 261.64, "end": 263.2, "text": " It can do this too."}, {"start": 263.2, "end": 264.2, "text": " Loving it."}, {"start": 264.2, "end": 268.88, "text": " Apparently, it understands geometry, shapes and materials."}, {"start": 268.88, "end": 271.88, "text": " I wonder what else does it understand?"}, {"start": 271.88, "end": 273.52, "text": " Well, get this."}, {"start": 273.52, "end": 278.71999999999997, "text": " For instance, it 
even understands styles and rendering techniques."}, {"start": 278.71999999999997, "end": 283.71999999999997, "text": " Being a graphics person, I am so happy to see that it learned the concept of low polygon"}, {"start": 283.71999999999997, "end": 294.84, "text": " count rendering, isometric views, clay objects, and we can even add an x-ray view to the owl."}, {"start": 294.84, "end": 295.84, "text": " Kind of."}, {"start": 295.84, "end": 301.15999999999997, "text": " And now, if all that wasn't enough, hold on to your papers because we can also commission"}, {"start": 301.15999999999997, "end": 307.44, "text": " artistic illustrations for free, and not only that, but even have fine-grained control"}, {"start": 307.44, "end": 310.12, "text": " over these artistic illustrations."}, {"start": 310.12, "end": 315.71999999999997, "text": " I also learned that if manatees wore suits, they would wear them like this, and after"}, {"start": 315.71999999999997, "end": 322.79999999999995, "text": " a long and strenuous day walking their dogs, they can go for yet another round in pajamas."}, {"start": 322.79999999999995, "end": 324.67999999999995, "text": " But it does not stop there."}, {"start": 324.68, "end": 329.88, "text": " That can not only generate paintings of nearly anything, but we can even choose the artistic"}, {"start": 329.88, "end": 333.40000000000003, "text": " style and the time of day as well."}, {"start": 333.40000000000003, "end": 338.36, "text": " The night images are a little on the nose, as most of them have the moon in the background,"}, {"start": 338.36, "end": 340.92, "text": " but I'll be more than happy to take these."}, {"start": 340.92, "end": 345.68, "text": " And the best part is that you can try it yourself right now through the link in the video"}, {"start": 345.68, "end": 346.88, "text": " description."}, {"start": 346.88, "end": 352.28000000000003, "text": " In general, not all results are perfect, but it's hard to even fathom all the things"}, {"start": 352.28, "end": 356.84, "text": " this will enable us to do in the near future when we can get our hands on these pre-trained"}, {"start": 356.84, "end": 357.84, "text": " models."}, {"start": 357.84, "end": 362.2, "text": " And this may be the first technique where the results are not limited by the algorithm,"}, {"start": 362.2, "end": 364.67999999999995, "text": " but by our own imagination."}, {"start": 364.67999999999995, "end": 370.11999999999995, "text": " Now this is a quote that I said about GPT-3, and notice that the exact same thing can"}, {"start": 370.11999999999995, "end": 372.11999999999995, "text": " be said about Dolly."}, {"start": 372.11999999999995, "end": 377.96, "text": " Quote, the main point is that working with GPT-3 is a really peculiar process where we know"}, {"start": 377.96, "end": 383.76, "text": " that a vast body of knowledge lies within, but it only emerges if we can bring it out"}, {"start": 383.76, "end": 385.91999999999996, "text": " with properly written prompts."}, {"start": 385.91999999999996, "end": 391.15999999999997, "text": " It almost feels like a new kind of programming that is open to everyone, even people without"}, {"start": 391.15999999999997, "end": 394.15999999999997, "text": " any programming or technical knowledge."}, {"start": 394.15999999999997, "end": 399.91999999999996, "text": " If a computer is a bicycle for the mind, then GPT-3 is a fighter jet."}, {"start": 399.91999999999996, "end": 401.64, "text": " Absolutely incredible."}, {"start": 401.64, "end": 
406.47999999999996, "text": " And I think this kind of programming is going to be more and more common in the future."}, {"start": 406.48, "end": 412.6, "text": " Now, note that these are some amazing preliminary results, but the full paper is not available"}, {"start": 412.6, "end": 413.6, "text": " yet."}, {"start": 413.6, "end": 417.8, "text": " So this was not two minutes, and it was not about a paper."}, {"start": 417.8, "end": 419.8, "text": " Welcome to two-minute papers."}, {"start": 419.8, "end": 424.36, "text": " Jokes aside, I cannot wait for the paper to appear and I'll be here to have a closer"}, {"start": 424.36, "end": 426.40000000000003, "text": " look whenever it happens."}, {"start": 426.40000000000003, "end": 431.16, "text": " Make sure to subscribe and hit the bell icon to not miss it when the big day comes."}, {"start": 431.16, "end": 435.88, "text": " Until then, let me know in the comments what crazy concoctions you came up with."}, {"start": 435.88, "end": 441.15999999999997, "text": " PerceptiLabs is a visual API for TensorFlow, carefully designed to make machine learning"}, {"start": 441.15999999999997, "end": 443.48, "text": " as intuitive as possible."}, {"start": 443.48, "end": 448.4, "text": " This gives you a faster way to build out models with more transparency into how your model"}, {"start": 448.4, "end": 452.64, "text": " is architected, how it performs, and how to debug it."}, {"start": 452.64, "end": 457.32, "text": " Look, it lets you toggle between the visual modeler and the code editor."}, {"start": 457.32, "end": 462.4, "text": " It even generates visualizations for all the model variables and gives you recommendations"}, {"start": 462.4, "end": 467.03999999999996, "text": " both during modeling and training and does all this automatically."}, {"start": 467.03999999999996, "end": 471.64, "text": " I only wish I had a tool like this when I was working on my neural networks during my"}, {"start": 471.64, "end": 473.12, "text": " PhD years."}, {"start": 473.12, "end": 478.88, "text": " Visit perceptiLabs.com slash papers to easily install the free local version of their system"}, {"start": 478.88, "end": 479.88, "text": " today."}, {"start": 479.88, "end": 484.44, "text": " Our thanks to perceptiLabs for their support and for helping us make better videos for"}, {"start": 484.44, "end": 485.44, "text": " you."}, {"start": 485.44, "end": 495.04, "text": " Watch again for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=9XM5-CJzrU0
Light Fields - Videos From The Future! 📸
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Immersive Light Field Video with a Layered Mesh Representation" is available here: https://augmentedperception.github.io/deepviewvideo/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #lightfields
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Whenever we take a photo, we capture a piece of reality from one viewpoint. Or, if we have multiple cameras on our smartphone, a few viewpoints at most. In an earlier episode, we explored how to upgrade these to 3D photos where we could kind of look behind the person. I am saying kind of because what we see here is not reality. This is statistical data that is filled in by an algorithm to match its surroundings, which we refer to as image inpainting. So, strictly speaking, it is likely information, but not necessarily true information. And also, we can recognize the synthetic parts of the image, as they are significantly blurrier. So the question naturally arises in the mind of the curious scholar: how about actually looking behind the person? Is that somehow possible, or is that still science fiction? Well, hold on to your papers, because this technique shows us the images of the future by sticking a bunch of cameras onto a spherical shell. And when we capture a video, it will see something like this. The goal is to untangle this mess, and we are not done yet. We also need to reconstruct the geometry of the scene as if the video was captured from many different viewpoints at the same time. Absolutely amazing. And yes, this means that we can change our viewpoint while the video is running. Since it is doing the reconstruction in layers, we know how far each object is in these scenes, enabling us to rotate these sparks and flames and look at them in 3D. Yum! Now, I am a light transport researcher by trade, so I hope you can tell that I am very happy about these beautiful volumetric effects, but I would also love to know how it deals with reflective surfaces. Let's see together. Look at the reflections in the sand here, and I'll add a lot of camera movement. Wow, this thing works. It really works. And it does not break a sweat even if we try a more reflective surface, or an even more reflective surface. This is as reflective as it gets, I'm afraid, and we still get a consistent and crisp image in the mirror. Bravo! Alright, let's get a little more greedy. What about seeing through thin fences? That is quite a challenge. And look at the detail there. This is still a touch blurrier here and there, but overall, very impressive. So what do we do with a video like this? Well, we can use our mouse to look around within the photo in our web browser; you can try this yourself right now by clicking on the paper in the video description. Make sure to follow the instructions if you do. Or we can make the viewing experience even more immersive with a head-mounted display, where, of course, the image will follow wherever we turn our head. Both of these truly feel like entering a photograph and getting a feel for the room therein. Loving it. Now, since there is a lot of information in these light field videos, they also need a powerful internet connection to relay them. And when using H.265, a powerful video compression standard, we are still talking on the order of hundreds of megabits per second. That is like streaming several 4K videos at the same time. Compression helps; however, we also have to make sure that we don't compress too much, so that compression artifacts don't eat the content behind thin geometry, or at least not too much. I bet this will be an interesting topic for a follow-up paper, so make sure to subscribe and hit the bell icon to not miss it when it appears.
And for now, more practical light field photos and videos will be available that allow us to almost feel like we are really in the room with the subjects of the videos. What a time to be alive! This episode has been supported by Lambda GPU Cloud. If you are looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
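To put that bandwidth figure in perspective, here is a quick back-of-the-envelope calculation in Python. It is only a rough sketch: the 300 Mbit/s light field bitrate and the 25 Mbit/s figure for a single H.265 4K stream are assumed example values for illustration, not numbers taken from the paper.

# Back-of-the-envelope check: how does a light field video stream compare
# to ordinary 4K streams? The bitrates below are illustrative assumptions,
# not numbers from the paper.

LIGHT_FIELD_MBPS = 300.0   # "hundreds of megabits" -- assumed example value
TYPICAL_4K_MBPS = 25.0     # rough H.265 bitrate for one 4K stream (assumption)

equivalent_4k_streams = LIGHT_FIELD_MBPS / TYPICAL_4K_MBPS
gigabytes_per_hour = LIGHT_FIELD_MBPS / 8 * 3600 / 1000  # Mbit/s -> MB/s -> MB/h -> GB/h

print(f"Roughly {equivalent_4k_streams:.0f} simultaneous 4K streams")
print(f"About {gigabytes_per_hour:.0f} GB per hour of footage")

With these assumed figures, one light field stream eats the bandwidth of roughly a dozen 4K streams, which is why the compression trade-off discussed above matters so much.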
[{"start": 0.0, "end": 4.98, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karajona Ifeher."}, {"start": 4.98, "end": 10.540000000000001, "text": " Whenever we take a photo, we capture a piece of reality from one viewpoint."}, {"start": 10.540000000000001, "end": 15.780000000000001, "text": " Or if we have multiple cameras on our smartphone, a few viewpoints at most."}, {"start": 15.780000000000001, "end": 21.6, "text": " In an earlier episode, we explored how to upgrade these to 3D photos where we could kind"}, {"start": 21.6, "end": 23.8, "text": " of look behind the person."}, {"start": 23.8, "end": 30.32, "text": " I am saying kind of because what we see here is not reality. This is statistical data"}, {"start": 30.32, "end": 35.64, "text": " that is filled in by an algorithm to match its surroundings which we refer to as image"}, {"start": 35.64, "end": 37.16, "text": " in painting."}, {"start": 37.16, "end": 44.08, "text": " So strictly speaking, it is likely information, but not necessarily true information."}, {"start": 44.08, "end": 49.08, "text": " And also, we can recognize the synthetic parts of the image as they are significantly"}, {"start": 49.08, "end": 50.08, "text": " blurrier."}, {"start": 50.08, "end": 56.1, "text": " So the question naturally arises in the mind of the curious scholar, how about actually"}, {"start": 56.1, "end": 58.28, "text": " looking behind the person?"}, {"start": 58.28, "end": 62.2, "text": " Is that somehow possible or is that still science fiction?"}, {"start": 62.2, "end": 67.6, "text": " Well, hold on to your papers because this technique shows us the images of the future"}, {"start": 67.6, "end": 72.64, "text": " by sticking a bunch of cameras onto a spherical shell."}, {"start": 72.64, "end": 77.88, "text": " And when we capture a video, it will see something like this."}, {"start": 77.88, "end": 82.19999999999999, "text": " And the goal is to untangle this mess, and we are not done yet."}, {"start": 82.19999999999999, "end": 87.6, "text": " We also need to reconstruct the geometry of the scene as if the video was captured from"}, {"start": 87.6, "end": 91.16, "text": " many different viewpoints at the same time."}, {"start": 91.16, "end": 93.36, "text": " Absolutely amazing."}, {"start": 93.36, "end": 100.0, "text": " And yes, this means that we can change our viewpoint while the video is running."}, {"start": 100.0, "end": 105.44, "text": " Since it is doing the reconstruction in layers, we know how far each object is in these"}, {"start": 105.44, "end": 112.03999999999999, "text": " scenes enabling us to rotate these sparks and flames and look at them in 3D."}, {"start": 112.03999999999999, "end": 113.03999999999999, "text": " Yum!"}, {"start": 113.03999999999999, "end": 118.64, "text": " Now, I am a light transport researcher by trade, so I hope you can tell that I am very happy"}, {"start": 118.64, "end": 124.64, "text": " about these beautiful volumetric effects, but I would also love to know how it deals with"}, {"start": 124.64, "end": 126.0, "text": " reflective surfaces."}, {"start": 126.0, "end": 128.32, "text": " Let's see together."}, {"start": 128.32, "end": 133.16, "text": " Look at the reflections in the sand here, and I'll add a lot of camera movement."}, {"start": 133.16, "end": 136.56, "text": " Wow, this thing works."}, {"start": 136.56, "end": 138.24, "text": " It really works."}, {"start": 138.24, "end": 144.4, "text": " And it does not break a sweat even if we try a more reflective surface or an 
even more"}, {"start": 144.4, "end": 146.16, "text": " reflective surface."}, {"start": 146.16, "end": 152.16, "text": " This is as reflective as it gets I'm afraid, and we still get a consistent and crisp image"}, {"start": 152.16, "end": 153.16, "text": " in the mirror."}, {"start": 153.16, "end": 154.16, "text": " Bravo!"}, {"start": 154.16, "end": 158.0, "text": " Alright, let's get a little more greedy."}, {"start": 158.0, "end": 160.8, "text": " What about seeing through thin fences?"}, {"start": 160.8, "end": 163.8, "text": " That is quite a challenge."}, {"start": 163.8, "end": 166.36, "text": " And look at the tailwax there."}, {"start": 166.36, "end": 171.76000000000002, "text": " This is still a touch blurrier here and there, but overall, very impressive."}, {"start": 171.76000000000002, "end": 174.64000000000001, "text": " So what do we do with a video like this?"}, {"start": 174.64000000000001, "end": 179.64000000000001, "text": " Well, we can use our mouse to look around within the photo in our web browser, you can"}, {"start": 179.64000000000001, "end": 184.60000000000002, "text": " try this yourself right now by clicking on the paper in the video description."}, {"start": 184.60000000000002, "end": 187.24, "text": " Make sure to follow the instructions if you do."}, {"start": 187.24, "end": 192.68, "text": " Or we can make the viewing experience even more immersive to the head mounted display,"}, {"start": 192.68, "end": 197.36, "text": " where, of course, the image will follow wherever we turn our head."}, {"start": 197.36, "end": 203.04000000000002, "text": " Both of these truly feel like entering a photograph and getting a feel of the room therein."}, {"start": 203.04000000000002, "end": 204.04000000000002, "text": " Loving it."}, {"start": 204.04000000000002, "end": 209.36, "text": " Now, since there is a lot of information in these light field videos, it also needs a powerful"}, {"start": 209.36, "end": 212.04000000000002, "text": " internet connection to relay them."}, {"start": 212.04, "end": 218.76, "text": " And when using h.265, a powerful video compression standard, we are talking in the order of hundreds"}, {"start": 218.76, "end": 219.76, "text": " of megabits."}, {"start": 219.76, "end": 226.0, "text": " It is like streaming several videos in 4K resolution at the same time."}, {"start": 226.0, "end": 231.04, "text": " Compression helps, however, we also have to make sure that we don't compress too much,"}, {"start": 231.04, "end": 236.39999999999998, "text": " so that compression artifacts don't eat the content behind thin geometry, or at least"}, {"start": 236.39999999999998, "end": 237.79999999999998, "text": " not too much."}, {"start": 237.8, "end": 242.84, "text": " I bet this will be an interesting topic for a follow-up paper, so make sure to subscribe"}, {"start": 242.84, "end": 246.48000000000002, "text": " and hit the bell icon to not miss it when it appears."}, {"start": 246.48000000000002, "end": 251.52, "text": " And for now, more practical light field photos and videos will be available that allow us"}, {"start": 251.52, "end": 256.48, "text": " to almost feel like we are really in the room with the subjects of the videos."}, {"start": 256.48, "end": 258.36, "text": " What a time to be alive!"}, {"start": 258.36, "end": 261.8, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 261.8, "end": 267.76, "text": " If you are looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 267.76, "end": 
275.64, "text": " They recently launched Quadro RTX 6000, RTX 8000, and V100 instances, and hold onto your"}, {"start": 275.64, "end": 282.12, "text": " papers because Lambda GPU Cloud can cost less than half of AWS and Azure."}, {"start": 282.12, "end": 287.64, "text": " Plus, they are the only Cloud service with 48GB RTX 8000."}, {"start": 287.64, "end": 294.03999999999996, "text": " And researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances,"}, {"start": 294.03999999999996, "end": 295.84, "text": " workstations, or servers."}, {"start": 295.84, "end": 301.2, "text": " Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU"}, {"start": 301.2, "end": 302.52, "text": " instances today."}, {"start": 302.52, "end": 307.32, "text": " Our thanks to Lambda for their long-standing support and for helping us make better videos"}, {"start": 307.32, "end": 308.32, "text": " for you."}, {"start": 308.32, "end": 335.76, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=IDMiMKWucaI
NERFIES: The Selfies of The Future! 🤳
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/stacey/xray/reports/X-Ray-Illumination--Vmlldzo4MzA5MQ 📝 The paper "Deformable Neural Radiance Fields" is available here: https://nerfies.github.io/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #nerfies #selfies
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Today, we are going to get a taste of how insanely quick progress is in machine learning research. In March of 2020, a paper appeared that goes by the name Neural Radiance Fields, NeRF in short. With this technique, we could take a bunch of input photos, get a neural network to learn them, and then synthesize new, previously unseen views of not just the materials in the scene, but the entire scene itself. And here we are talking not only about digital environments, but real scenes as well. Just to make sure, once again: it can learn and reproduce entire real-world scenes from only a few views by using neural networks. However, of course, NeRF had its limitations. For instance, in many cases, it has trouble with scenes with variable lighting conditions and lots of occluders. And to my delight, only five months later, in August of 2020, a follow-up paper appeared by the name NeRF in the Wild, or NeRF-W in short. Its specialty was tourist attractions that a lot of people take photos of, where we then have a collection of photos taken at different times of the day, and of course with a lot of people around. And lots of people, of course, means lots of occluders. NeRF-W improved the original algorithm to excel more in cases like this. And we are still not done yet, because, get this, only three months later, on November 25th, 2020, another follow-up paper appeared by the name Deformable Neural Radiance Fields, D-NeRF. The goal here is to take a selfie video and turn it into a portrait that we can rotate around freely. This is something that the authors call a nerfie. If we take the original NeRF technique to perform this, we see that it does not do well at all with moving things. And here is where the deformable part of the name comes into play. And now, hold onto your papers and marvel at the results of the new D-NeRF technique. A clean reconstruction. We indeed get a nice portrait that we can rotate around freely, and all of the previous NeRF artifacts are gone. It performs well, even on difficult cases with beards, all kinds of hairstyles, and more. And now, hold onto your papers, because glasses work too, and not only that, but it even computes the proper reflection and refraction off of the lens. And this is just the start of a deluge of new features. For instance, we can even zoom out and capture the whole body of the test subject. Furthermore, it is not limited to people. It works on dogs too, although in this case, we will have to settle for a lower-resolution output. It can pull off the iconic dolly zoom effect really well. And, amusingly, we can even perform a Nerfception, which is recording ourselves as we record ourselves. I hope that now you have a good feel of the pace of progress in machine learning research, which is absolutely incredible. So much progress in just 9 months of research. My goodness. What a time to be alive! This episode has been supported by weights and biases. In this post, they show you how to use their tool to analyze chest x-rays and more. If you work with learning algorithms on a regular basis, make sure to check out weights and biases. Their system is designed to help you organize your experiments, and it is so good it could shave off weeks or even months of work from your projects, and is completely free for all individuals, academics, and open source projects.
This really is as good as it gets, and it is hardly a surprise that they are now used by over 200 companies and research institutions. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to weights and biases for their longstanding support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
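For readers who want a feel for what a radiance field actually is, here is a minimal PyTorch sketch of the idea: a coordinate network maps a 3D point and viewing direction to color and density, and the deformable variant adds a second network that warps each observed point into a canonical, undeformed scene. This is a toy illustration under simplifying assumptions (no positional encoding, no volume rendering along rays), not the authors' implementation, and the network sizes are arbitrary.

# Highly simplified sketch of the NeRF / deformable-NeRF idea (toy code,
# not the authors' implementation).
import torch
import torch.nn as nn

class TinyDeformableNeRF(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        # Deformation field: (x, y, z, per-frame time code) -> offset into canonical space.
        self.deform = nn.Sequential(nn.Linear(4, hidden), nn.ReLU(),
                                    nn.Linear(hidden, 3))
        # Canonical radiance field: (x, y, z, view direction) -> (r, g, b, density).
        self.field = nn.Sequential(nn.Linear(6, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 4))

    def forward(self, xyz, view_dir, t):
        # Warp the observed point into the canonical (undeformed) scene.
        canonical = xyz + self.deform(torch.cat([xyz, t], dim=-1))
        out = self.field(torch.cat([canonical, view_dir], dim=-1))
        rgb = torch.sigmoid(out[..., :3])   # colors in [0, 1]
        sigma = torch.relu(out[..., 3:])    # non-negative density
        return rgb, sigma

# Query a batch of sample points along camera rays (shapes are illustrative).
model = TinyDeformableNeRF()
xyz = torch.rand(1024, 3)    # sample positions along rays
view = torch.rand(1024, 3)   # viewing directions
t = torch.rand(1024, 1)      # per-frame latent / time code
rgb, sigma = model(xyz, view, t)

In a full pipeline, these color and density samples would be integrated along each camera ray to form a pixel, and the networks would be trained so the rendered pixels match the input photos.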
[{"start": 0.0, "end": 4.76, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.76, "end": 10.88, "text": " Today, we are going to get a taste of how insanely quick progress is in machine learning research."}, {"start": 10.88, "end": 18.56, "text": " In March of 2020, a paper appeared that goes by the name Neural Radiance Fields, Nerf in short."}, {"start": 18.56, "end": 24.04, "text": " With this technique, we could take a bunch of input photos, get a neural network to learn them,"}, {"start": 24.04, "end": 32.6, "text": " and then synthesize new previously unseen views of not just the materials in the scene, but the entire scene itself."}, {"start": 32.6, "end": 38.519999999999996, "text": " And here we are talking not only digital environments, but also real scenes as well."}, {"start": 38.519999999999996, "end": 48.36, "text": " Just to make sure, once again, it can learn and reproduce entire real-world scenes from only a few views by using neural networks."}, {"start": 48.36, "end": 52.2, "text": " However, of course, Nerf had its limitations."}, {"start": 52.2, "end": 59.56, "text": " For instance, in many cases, it has trouble with scenes with variable lighting conditions and lots of occluders."}, {"start": 59.56, "end": 71.08, "text": " And to my delight, only five months later, in August of 2020, a follow-up paper by the name Nerf in the wild, or Nerf W in short."}, {"start": 71.08, "end": 76.12, "text": " Its specialty was tourist attractions that a lot of people take photos of,"}, {"start": 76.12, "end": 83.88000000000001, "text": " and we then have a collection of photos taken during a different time of the day, and of course with a lot of people around."}, {"start": 83.88000000000001, "end": 87.92, "text": " And lots of people, of course, means lots of occluders."}, {"start": 87.92, "end": 93.64, "text": " Nerf W improved the original algorithm to excel more in cases like this."}, {"start": 93.64, "end": 101.32000000000001, "text": " And we are still not done yet, because get this only three months later, on 2020, November 25th,"}, {"start": 101.32, "end": 107.88, "text": " another follow-up paper appeared by the name deformable neural radiance fields, D-Nerf."}, {"start": 107.88, "end": 115.16, "text": " The goal here is to take a selfie video and turn it into a portrait that we can rotate around freely."}, {"start": 115.16, "end": 120.44, "text": " This is something that the authors call a nerfie."}, {"start": 120.44, "end": 127.39999999999999, "text": " If we take the original Nerf technique to perform this, we see that it does not do well at all with moving things."}, {"start": 127.4, "end": 131.48000000000002, "text": " And here is where the deformable part of the name comes into play."}, {"start": 131.48000000000002, "end": 137.24, "text": " And now, hold onto your papers and marvel at the results of the new D-Nerf technique."}, {"start": 137.24, "end": 139.32, "text": " A clean reconstruction."}, {"start": 139.32, "end": 146.84, "text": " We indeed get a nice portrait that we can rotate around freely, and all of the previous Nerf artifacts are gone."}, {"start": 146.84, "end": 154.12, "text": " It performs well, even on difficult cases with beards, all kinds of hairstyles, and more."}, {"start": 154.12, "end": 162.04, "text": " And now, hold onto your papers, because glasses work too, and not only that,"}, {"start": 162.04, "end": 166.84, "text": " but it even computes the proper reflection and refraction 
off of the lens."}, {"start": 166.84, "end": 171.56, "text": " And this is just the start of a deluge of new features."}, {"start": 171.56, "end": 176.92000000000002, "text": " For instance, we can even zoom out and capture the whole body of the test subject."}, {"start": 176.92, "end": 183.07999999999998, "text": " Furthermore, it is not limited to people."}, {"start": 183.07999999999998, "end": 189.23999999999998, "text": " It also works on dogs too, although in this case, we will have to settle with a lower resolution output."}, {"start": 194.44, "end": 198.11999999999998, "text": " It can pull off the iconic dolly zoom effect really well."}, {"start": 198.12, "end": 210.20000000000002, "text": " And, amusingly, we can even perform a Nerfception, which is recording ourselves, as we record ourselves."}, {"start": 211.56, "end": 216.28, "text": " I hope that now you have a good feel of the pace of progress in machine learning research,"}, {"start": 216.28, "end": 222.52, "text": " which is absolutely incredible. So much progress in just 9 months of research."}, {"start": 223.24, "end": 225.88, "text": " My goodness. What a time to be alive!"}, {"start": 225.88, "end": 229.4, "text": " This episode has been supported by weights and biases."}, {"start": 229.4, "end": 234.6, "text": " In this post, they show you how to use their tool to analyze chest x-rays and more."}, {"start": 235.16, "end": 240.92, "text": " If you work with learning algorithms on a regular basis, make sure to check out weights and biases."}, {"start": 240.92, "end": 245.72, "text": " Their system is designed to help you organize your experiments, and it is so good"}, {"start": 245.72, "end": 249.64, "text": " it could shave off weeks or even months of work from your projects,"}, {"start": 249.64, "end": 255.16, "text": " and is completely free for all individuals, academics, and open source projects."}, {"start": 255.16, "end": 261.24, "text": " This really is as good as it gets, and it is hardly a surprise that they are now used by over 200"}, {"start": 261.24, "end": 267.88, "text": " companies and research institutions. Make sure to visit them through wnb.com, slash papers,"}, {"start": 267.88, "end": 272.44, "text": " or just click the link in the video description, and you can get a free demo today."}, {"start": 272.44, "end": 277.56, "text": " Our thanks to weights and biases for their longstanding support, and for helping us make better"}, {"start": 277.56, "end": 291.72, "text": " videos for you. Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Lt4Z5oOAeEY
This AI Gave Elon Musk A Majestic Beard! 🧔
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/wandb/getting-started/reports/Debug-Compare-Reproduce-Machine-Learning-Models--VmlldzoyNzY5MDk?utm_source=karoly 📝 The paper "StyleFlow: Attribute-conditioned Exploration of StyleGAN-generated Images using Conditional Continuous Normalizing Flows" is available here. ⚠️ The source code is now also available! https://rameenabdal.github.io/StyleFlow/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #elonmusk #styleflow
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Here you see people that don't exist. They don't exist because these images were created with a neural network-based learning method by the name StyleGAN2, which can not only create eye-poppingly detailed-looking images, but it can also fuse these people together, or generate cars, churches, horses, and of course cats. This is quite convincing, so is there anything else to do in human face generation? Are we done? Well, this footage from a new paper may give some of it away. If you have been watching this series for a while, you know that of course researchers always find a way to make these techniques better. We always say two more papers down the line, and it will be improved a great deal. And now we are one more paper down the line, so let's see together if there has been any improvement. This new technique is based on StyleGAN2 and is called StyleFlow, and it can take an input photo of a test subject and edit a number of meaningful parameters. Age, expression, lighting, pose, you name it. Now, note that there were other techniques that could pull this off, but the key improvement here is that one, we can perform many sequential changes, and two, it does all this while remaining faithful to the original photo. And hold on to your papers because three, it can also help Elon Musk to grow a majestic beard. And believe it or not, you will also see a run of this algorithm on me as well at the end of the video. First, let's adjust the lighting a little, and now it's time for that beard. Loving it. Now, a little aging, and one more beard please. It seems to me that this beard is different from the young man's beard, which is nice attention to detail. And note that we have strayed very far from the input image, but if you look at the intermediate states, you see that the essence of the test subject is indeed quite similar. This Elon is still Elon. Note that these results are not publicly available and were made specifically for us, so you can only see this here on Two Minute Papers. That is quite an honor, so thank you very much to Rameen Abdal, the first author of the paper, for being so kind. Now, another key improvement in this work is that we can change one of these parameters with little effect on anything else. Have a look at this workflow and see how well we can perform these sequential edits. These are the source images, and the labels showcase one variable change for each subsequent image, and you can see how amazingly surgical the changes are. Witchcraft! If it says that we changed the facial hair, that's the only change I see in the output. And just think about the fact that these starting images are real photos, but the outputs of the algorithm are made-up people that don't exist. And notice that the background is also mostly left untouched. Why does that matter? You will see in the next example. So far so good, but this method can do way more than this. Now let's edit not people, but cars. I love how well the color and pose variations work. Now, if you look here, you see that there is quite a bit of collateral damage, as not only the cars, but the background is also changing, opening up the door for a potentially more surgical follow-up paper. Make sure to subscribe and hit the bell icon to get notified when we cover this one more paper down the line. And now, here's what you have been waiting for: we'll get hands-on experience with this technique where I shall be the subject of the next experiment.
First, the algorithm is looking for high-resolution frontal images, so for instance, this would not work at all; we would have to look for a different image. No matter, here's another one where I got locked up for reading too many papers. This could be used as an input for the algorithm. And look, this image looks a little different. Why is that? It is because StyleGAN2 runs an embedding operation on the photo before starting its work. This is an interesting detail that we only see if we start using the technique ourselves. And now, come on, give me that beard. Oh, loving it. What do you think? Whose beard is better? Elon's or mine? Let me know in the comments below. And now, please meet Old Man Károly, who will tell you that papers were way better back in his day, and here are the usual transformations. But also note that as a limitation, we had quite a bit of collateral damage for the background. This was quite an experience. Thank you so much. And we are not done yet because this paper just keeps on giving. It can also perform attribute transfer. What this means is that we have two input photos. This will be the source image, and we can choose a bunch of parameters that we would like to extract from it. For instance, the lighting and pose can be extracted through attribute transfer, and it seems to even estimate the age of the target subject and change the source to match it better. Loving it. The source code for this technique is also available, and make sure to have a look at the paper. It is very thoroughly evaluated. The authors went the extra mile there, and it really shows. And I hope that you now agree, even if there is a technique that appears quite mature, researchers always find a way to further improve it. And who knew it took a competent AI to get me to grow a beard. Totally worth it. What a time to be alive. This episode has been supported by weights and biases. In this post, they show you how to use sweeps to automate hyperparameter optimization and explore the space of possible models and find the best one. During my PhD studies, I trained a ton of neural networks which were used in our experiments. However, over time, there was just too much data in our repositories, and what I am looking for is not data, but insight. And that's exactly how weights and biases helps you by organizing your experiments. It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub, and more. And get this, weights and biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com slash papers or just click the link in the video description and you can get a free demo today. Our thanks to weights and biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
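To make the idea of editing in a latent space a bit more concrete, here is a tiny NumPy sketch. It is a deliberately simplified stand-in: a real pipeline first embeds the photo into StyleGAN2's latent space, and StyleFlow then uses conditional normalizing flows to change one attribute at a time, whereas this toy just moves a latent code along a single assumed attribute direction to show why such an edit can leave everything else untouched. The latent code, the attribute direction, and the edit strength are all made up for illustration.

# Toy illustration of latent-space attribute editing (my simplification,
# not the StyleFlow code).
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 512                        # StyleGAN2's w-space dimensionality

w = rng.normal(size=latent_dim)         # pretend this is the embedded photo
beard_direction = rng.normal(size=latent_dim)
beard_direction /= np.linalg.norm(beard_direction)  # unit attribute direction (assumed known)

def edit(latent, direction, strength):
    """Move the latent code along one attribute while leaving the rest untouched."""
    return latent + strength * direction

w_bearded = edit(w, beard_direction, strength=3.0)

# A generator G(w) would now synthesize the edited face; here we only check
# that the change is confined to the chosen direction.
residual = (w_bearded - w) - 3.0 * beard_direction
print("off-direction change:", np.linalg.norm(residual))  # ~0 by construction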
[{"start": 0.0, "end": 4.48, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.48, "end": 7.44, "text": " Here you see people that don't exist."}, {"start": 7.44, "end": 12.48, "text": " They don't exist because these images were created with a neural network-based learning method"}, {"start": 12.48, "end": 18.56, "text": " by the name Staggan2, which can not only create eye-poppingly detailed looking images,"}, {"start": 18.56, "end": 27.04, "text": " but it can also fuse these people together, or generate cars, churches, horses, and of course cats."}, {"start": 27.04, "end": 33.04, "text": " This is quite convincing, so is there anything else to do in human phase generation?"}, {"start": 33.04, "end": 34.48, "text": " Are we done?"}, {"start": 34.48, "end": 38.56, "text": " Well, this footage from a new paper may give some of it away."}, {"start": 38.56, "end": 43.92, "text": " If you have been watching this series for a while, you know that of course researchers always find"}, {"start": 43.92, "end": 49.76, "text": " a way to make these techniques better. We always say two more papers down the line, and it will"}, {"start": 49.76, "end": 55.68, "text": " be improved a great deal. And now we are one more paper down the line, so let's see together"}, {"start": 55.68, "end": 61.84, "text": " if there has been any improvement. This new technique is based on Staggan2 and is called"}, {"start": 61.84, "end": 68.8, "text": " Styrofo, and it can take an input photo of a test subject and edit a number of meaningful parameters."}, {"start": 69.44, "end": 76.64, "text": " Age, expression, lighting, pose, you name it. Now, note that there were other techniques that could"}, {"start": 76.64, "end": 83.6, "text": " pull this off, but the key improvement here is that one, we can perform many sequential changes,"}, {"start": 83.6, "end": 88.55999999999999, "text": " and two, it does all this while remaining faithful to the original photo."}, {"start": 89.11999999999999, "end": 95.6, "text": " And hold on to your papers because three, it can also help Elon Musk to grow a majestic beard."}, {"start": 96.32, "end": 102.08, "text": " And believe it or not, you will also see a run of this algorithm on me as well at the end of the video."}, {"start": 103.03999999999999, "end": 109.28, "text": " First, let's adjust the lighting a little, and now it's time for that beard."}, {"start": 109.28, "end": 118.32000000000001, "text": " Loving it. Now, a little aging, and one more beard please."}, {"start": 119.04, "end": 124.88, "text": " It seems to me that this beard is different from the young man beard, which is nice attention to"}, {"start": 124.88, "end": 130.16, "text": " detail. And note that we have strayed very far from the input image, but if you look at the"}, {"start": 130.16, "end": 135.76, "text": " intermediate states, you see that the essence of the test subject is indeed quite similar."}, {"start": 135.76, "end": 143.51999999999998, "text": " This Elon is still Elon. Note that these results are not publicly available and were made specifically"}, {"start": 143.51999999999998, "end": 149.92, "text": " for us so you can only see this here on two minute papers. That is quite an honor, so thank you"}, {"start": 149.92, "end": 156.39999999999998, "text": " very much to Rameen Abdal, the first author of the paper for being so kind. 
Now, another key"}, {"start": 156.39999999999998, "end": 161.76, "text": " improvement in this work is that we can change one of these parameters with little effect on"}, {"start": 161.76, "end": 167.28, "text": " anything else. Have a look at this workflow and see how well we can perform these sequential"}, {"start": 167.28, "end": 173.84, "text": " edits. These are the source images and the labels showcase one variable change for each sub-sequent"}, {"start": 173.84, "end": 180.48, "text": " image and you can see how amazingly surgical the changes are. Which craft? If it says that we"}, {"start": 180.48, "end": 186.32, "text": " changed the facial hair, that's the only change I see in the output. And just think about the"}, {"start": 186.32, "end": 193.12, "text": " fact that these starting images are real photos, but the outputs of the algorithm are made up people"}, {"start": 193.12, "end": 198.48, "text": " that don't exist. And notice that the background is also mostly left untouched."}, {"start": 200.0, "end": 206.56, "text": " Why does that matter? You will see in the next example. So far so good, but this method can do"}, {"start": 206.56, "end": 215.2, "text": " way more than this. Now let's edit not people, but cars. I love how well the color and pose variations"}, {"start": 215.2, "end": 228.95999999999998, "text": " work. Now, if you look here, you see that there is quite a bit of collateral damage,"}, {"start": 228.95999999999998, "end": 234.72, "text": " as not only the cars, but the background is also changing, opening up the door for a potentially"}, {"start": 234.72, "end": 240.72, "text": " more surgical follow-up paper. Make sure to subscribe and hit the bell icon to get notified when"}, {"start": 240.72, "end": 246.8, "text": " we cover this one more paper down the line. And now, here's what you have been waiting for,"}, {"start": 246.8, "end": 252.64, "text": " we'll get hands-on experience with this technique where I shall be the subject of the next experiment."}, {"start": 253.36, "end": 258.56, "text": " First, the algorithm is looking for high-resolution frontal images, so for instance,"}, {"start": 258.56, "end": 264.24, "text": " this would not work at all, we would have to look for a different image. No matter, here's another"}, {"start": 264.24, "end": 271.2, "text": " one where I got locked up for reading too many papers. This could be used as an input for the algorithm."}, {"start": 271.2, "end": 279.44, "text": " And look, this image looks a little different. Why is that? It is because StarGand 2 runs an"}, {"start": 279.44, "end": 284.8, "text": " embedding operation on the photo before starting its work. This is an interesting detail that we"}, {"start": 284.8, "end": 290.96000000000004, "text": " only see if we start using the technique ourselves. And now, come on, give me that beard."}, {"start": 290.96, "end": 299.59999999999997, "text": " Oh, loving it. What do you think? Whose beard is better? Elans or mine? Let me know in the comments"}, {"start": 299.59999999999997, "end": 306.08, "text": " below. And now, please meet Old Mancaroy, he will tell you that papers were way better back in his"}, {"start": 306.08, "end": 313.2, "text": " day and the usual transformations. But also note that as a limitation, we had quite a bit of"}, {"start": 313.2, "end": 319.12, "text": " collateral damage for the background. This was quite an experience. 
Thank you so much."}, {"start": 319.12, "end": 325.76, "text": " And we are not done yet because this paper just keeps on giving. It can also perform attribute"}, {"start": 325.76, "end": 331.28000000000003, "text": " transfer. What this means is that we have two input photos. This will be the source image,"}, {"start": 331.28000000000003, "end": 336.56, "text": " and we can choose a bunch of parameters that we would like to extract from it. For instance,"}, {"start": 336.56, "end": 342.96, "text": " the lighting and pose can be extracted through attribute transfer, and it seems to even estimate"}, {"start": 342.96, "end": 349.76, "text": " the age of the target subject and change the source to match it better. Loving it. The source code"}, {"start": 349.76, "end": 355.28, "text": " for this technique is also available and make sure to have a look at the paper. It is very thoroughly"}, {"start": 355.28, "end": 362.24, "text": " evaluated. The authors, when the extra mile there and it really shows. And I hope that you now agree,"}, {"start": 362.24, "end": 368.0, "text": " even if there is a technique that appears quite mature, researchers always find a way to further"}, {"start": 368.0, "end": 375.28, "text": " improve it. And who knew it took a competent AI to get me to grow a beard. Totally worth it."}, {"start": 375.28, "end": 381.12, "text": " What a time to be alive. This episode has been supported by weights and biases. In this post,"}, {"start": 381.12, "end": 387.6, "text": " they show you how to use sweeps to automate hyper parameter optimization and explore the space"}, {"start": 387.6, "end": 394.4, "text": " of possible models and find the best one. During my PhD studies, I trained a ton of neural networks"}, {"start": 394.4, "end": 400.88, "text": " which were used in our experiments. However, over time, there was just too much data in our repositories"}, {"start": 400.88, "end": 407.12, "text": " and what I am looking for is not data, but insight. And that's exactly how weights and biases"}, {"start": 407.12, "end": 412.64, "text": " helps you by organizing your experiments. It is used by more than 200 companies and research"}, {"start": 412.64, "end": 420.15999999999997, "text": " institutions, including OpenAI, Toyota Research, GitHub, and more. And get this, weights and biases"}, {"start": 420.16, "end": 426.16, "text": " is free for all individuals, academics, and open source projects. Make sure to visit them"}, {"start": 426.16, "end": 432.56, "text": " through www.nb.com slash papers or just click the link in the video description and you can get"}, {"start": 432.56, "end": 437.76000000000005, "text": " a free demo today. Our thanks to weights and biases for their long-standing support and for"}, {"start": 437.76000000000005, "end": 442.24, "text": " helping us make better videos for you. Thanks for watching and for your generous support,"}, {"start": 442.24, "end": 452.24, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=tiO43nJKGJY
Is Simulating Jelly And Bunnies Possible? 🐰
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/wandb/getting-started/reports/Debug-Compare-Reproduce-Machine-Learning-Models--VmlldzoyNzY5MDk?utm_source=karoly 📝 The paper "Monolith: A Monolithic Pressure-Viscosity-Contact Solver for Strong Two-Way Rigid-Rigid Rigid-Fluid Coupling" is available here: https://tetsuya-takahashi.github.io/Monolith/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. This new paper fixes many common problems when it comes to two-way coupling in fluid simulations. And of course, the first question is, what is two-way coupling? It means that here the boxes are allowed to move the smoke, and the added two-way coupling part means that now the smoke is also allowed to blow away the boxes. What's more, the vortices here on the right are even able to suspend the red box in the air for a few seconds. An excellent demonstration of a beautiful phenomenon. However, simulating this effect properly for water simulations and for gooey materials is a huge challenge, so let's see how traditional methods deal with them. Experiment number one. Water bunnies. Do you see what I am seeing here? Did you see the magic trick? Let's look again. Observe how much water we are starting with. A full bunny worth of water, and then by the end we have maybe a quarter of a bunny left. Oh yes, we have a substantial amount of numerical dissipation in the simulator that leads to volume loss. Can this be solved somehow? Well, let's see how this new work deals with this. Starting with one bunny. And ending with one bunny. Nice. Just look at the difference in the volume of water left with the new method compared to the previous one. Night and day difference. And this was not even the worst volume loss I've seen. Make sure to hold on to your papers and check out this one. Experiment number two. Gooey dragons and balls. When using a traditional technique, whoa, this guy is gone. And when we try a different method, it does this. My goodness. So let's see if the new method can deal with this case. Oh yes, yes it can. And now onwards to experiment number three. If you think that research is about throwing things at the wall and seeing what sticks, in the case of this scene, you are not wrong. So what should happen here given these materials? Well, the bunny should stick to the goo and not fall too quickly. Hmm, none of which happens here. The previous method does not simulate viscosity properly, and hence this artificial melting phenomenon emerges. I wonder if the new method can do this too. And yes, they stick together and the goo correctly slows down the fall of the bunny. So how does this magic work? Normally in these simulations we have to compute pressure, viscosity and frictional contact separately, which are three different tasks. The technique described in the paper is called Monolith because it has a monolithic pressure-viscosity-contact solver. Yes, this means that it does all three of these tasks in one go, which is mathematically a tiny bit more involved, but it gives us a proper simulator where water and goo can interact with solids. No volume loss, no artificial melting, no crazy jumpy behavior. And here comes the punchline. I was thinking that, all right, a more accurate simulator, that is always welcome, but what is the price of this accuracy? How much longer do I have to wait? If you have been holding on to your papers, now squeeze that paper, because this technique is not slower, but up to 10 times faster than previous methods, and that's where I fell off the chair when reading this paper. And with this, I hope that we will be able to marvel at even more delightful two-way coupled simulations in the near future. What a time to be alive. This episode has been supported by weights and biases. In this post, they show you how you can get an email or Slack notification when your model crashes.
With this, you can check on your model performance on any device. Heavenly. Weights and biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more. And the best part is that weights and biases is free for all individuals, academics, and open source projects. It really is as good as it gets. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get the free demo today. Our thanks to weights and biases for their longstanding support, and for helping us make better videos for you. Thanks for watching, and for your generous support, and I'll see you next time.
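The difference between solving the three tasks separately and solving them in one go can be illustrated with a tiny linear-algebra toy. The sketch below is only a conceptual stand-in: the real Monolith system is a large constrained problem assembled from the fluid discretization, while here two coupled fields are represented by small made-up matrices so that a split solve and a monolithic block solve can be compared side by side.

# Toy contrast between operator splitting and a monolithic solve (my own
# sketch; the real solver couples pressure, viscosity, and contact on a
# full fluid discretization, not this small block toy).
import numpy as np

rng = np.random.default_rng(1)
n = 4  # tiny number of unknowns per field, just for illustration

A = np.eye(n) * 4.0                      # pressure block (assumed)
D = np.eye(n) * 3.0                      # viscosity block (assumed)
C = rng.normal(scale=0.5, size=(n, n))   # coupling between the two fields (assumed)
b_p = rng.normal(size=n)
b_v = rng.normal(size=n)

# --- Split solve: handle pressure first, then viscosity, ignoring feedback.
p_split = np.linalg.solve(A, b_p)
v_split = np.linalg.solve(D, b_v - C @ p_split)

# --- Monolithic solve: one block system, both fields solved together.
K = np.block([[A, C.T],
              [C, D]])
rhs = np.concatenate([b_p, b_v])
sol = np.linalg.solve(K, rhs)
p_mono, v_mono = sol[:n], sol[n:]

print("pressure difference (split vs monolithic):", np.linalg.norm(p_split - p_mono))

The split solution ignores how the second field pushes back on the first, which is exactly the kind of feedback a monolithic solve captures; in the paper's setting, that feedback is what prevents artifacts like artificial melting.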
[{"start": 0.0, "end": 4.46, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.46, "end": 11.200000000000001, "text": " This new paper fixes many common problems when it comes to two-way coupling in fluid simulations."}, {"start": 11.200000000000001, "end": 15.4, "text": " And of course, the first question is, what is two-way coupling?"}, {"start": 15.4, "end": 19.6, "text": " It means that here the boxes are allowed to move the smoke"}, {"start": 19.6, "end": 27.2, "text": " and the added two-way coupling part means that now the smoke is also allowed to blow away the boxes."}, {"start": 27.2, "end": 34.8, "text": " What's more, the vortices here on the right are even able to suspend the red box in the air for a few seconds."}, {"start": 34.8, "end": 38.6, "text": " An excellent demonstration of a beautiful phenomenon."}, {"start": 38.6, "end": 46.0, "text": " However, simulating this effect properly for water simulations and for gooey materials is a huge challenge,"}, {"start": 46.0, "end": 49.4, "text": " so let's see how traditional methods deal with them."}, {"start": 49.4, "end": 53.2, "text": " Experiment number one. Water bunnies."}, {"start": 53.2, "end": 58.0, "text": " Do you see what I am seeing here? Did you see the magic trick?"}, {"start": 58.0, "end": 59.400000000000006, "text": " Let's look again."}, {"start": 59.400000000000006, "end": 62.6, "text": " Observe how much water we are starting with."}, {"start": 62.6, "end": 70.0, "text": " A full bunny worth of water, and then by the end we have maybe a quarter of a bunny left."}, {"start": 70.0, "end": 77.80000000000001, "text": " Oh yes, we have a substantial amount of numerical dissipation in the simulator that leads to volume loss."}, {"start": 77.80000000000001, "end": 80.0, "text": " Can this be solved somehow?"}, {"start": 80.0, "end": 83.6, "text": " Well, let's see how this new work deals with this."}, {"start": 83.6, "end": 88.0, "text": " Starting with one bunny."}, {"start": 88.0, "end": 91.2, "text": " And ending it with one bunny. Nice."}, {"start": 91.2, "end": 97.4, "text": " Just look at the difference of the volume of water left with a new method compared to the previous one."}, {"start": 97.4, "end": 99.4, "text": " Night and day difference."}, {"start": 99.4, "end": 102.6, "text": " And this was not even the worst volume loss I've seen."}, {"start": 102.6, "end": 107.2, "text": " Make sure to hold on to your papers and check out this one."}, {"start": 107.2, "end": 111.8, "text": " Experiment number two. 
Gooey dragons and balls."}, {"start": 111.8, "end": 113.8, "text": " When using a traditional technique,"}, {"start": 113.8, "end": 117.2, "text": " Whoa, this guy is gone."}, {"start": 117.2, "end": 121.60000000000001, "text": " And when we try a different method, it does this."}, {"start": 121.60000000000001, "end": 123.6, "text": " My goodness."}, {"start": 123.6, "end": 129.2, "text": " So let's see if the new method can deal with this case."}, {"start": 129.2, "end": 131.6, "text": " Oh yes, yes it can."}, {"start": 131.6, "end": 135.0, "text": " And now onwards to experiment number three."}, {"start": 135.0, "end": 141.6, "text": " If you think that research is about throwing things at the wall and seeing what sticks in the case of this scene,"}, {"start": 141.6, "end": 143.2, "text": " you are not wrong."}, {"start": 143.2, "end": 146.8, "text": " So what should happen here given these materials?"}, {"start": 146.8, "end": 151.8, "text": " Well, the bunny should stick to the goo and not fall too quickly."}, {"start": 151.8, "end": 154.8, "text": " Hmm, none of which happens here."}, {"start": 154.8, "end": 162.0, "text": " The previous method does not simulate viscosity properly and hence this artificial melting phenomenon emerges."}, {"start": 162.0, "end": 166.0, "text": " I wonder if the new method can do this too."}, {"start": 166.0, "end": 174.2, "text": " And yes, they stick together and the goo correctly slows down the fall of the bunny."}, {"start": 174.2, "end": 176.6, "text": " So how does this magic work?"}, {"start": 176.6, "end": 183.4, "text": " Normally in these simulations we have to compute pressure, viscosity and frictional contact separately,"}, {"start": 183.4, "end": 185.6, "text": " which are three different tasks."}, {"start": 185.6, "end": 193.2, "text": " The technique described in the paper is called monolith because it has a monolithic pressure viscosity contact solver."}, {"start": 193.2, "end": 200.79999999999998, "text": " Yes, this means that it does all three of these tasks in one go, which is mathematically a tiny bit more involved,"}, {"start": 200.79999999999998, "end": 207.0, "text": " but it gives us a proper simulator where water and goo can interact with solids."}, {"start": 207.0, "end": 212.4, "text": " No volume loss, no artificial melting, no crazy jumpy behavior."}, {"start": 212.4, "end": 214.79999999999998, "text": " And here comes the punchline."}, {"start": 214.8, "end": 223.0, "text": " I was thinking that all right, a more accurate simulator, that is always welcome, but what is the price of this accuracy?"}, {"start": 223.0, "end": 225.8, "text": " How much longer do I have to wait?"}, {"start": 225.8, "end": 232.20000000000002, "text": " If you have been holding on to your papers, now squeeze that paper because this technique is not slower,"}, {"start": 232.20000000000002, "end": 239.8, "text": " but up to 10 times faster than previous methods, and that's where I fell off the chair when reading this paper."}, {"start": 239.8, "end": 247.60000000000002, "text": " And with this, I hope that we will be able to marvel at even more delightful two way coupled simulations in the near future."}, {"start": 247.60000000000002, "end": 249.4, "text": " What a time to be alive."}, {"start": 249.4, "end": 252.60000000000002, "text": " This episode has been supported by weights and biases."}, {"start": 252.60000000000002, "end": 258.8, "text": " In this post, they show you how you can get an email or Slack notification when your model crashes."}, 
{"start": 258.8, "end": 262.8, "text": " With this, you can check on your model performance on any device."}, {"start": 262.8, "end": 263.8, "text": " Heavenly."}, {"start": 263.8, "end": 269.8, "text": " Weight and biases provides tools to track your experiments in your deep learning projects."}, {"start": 269.8, "end": 279.8, "text": " Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs such as OpenAI, Toyota Research, GitHub, and more."}, {"start": 279.8, "end": 286.8, "text": " And the best part is that weights and biases is free for all individuals, academics, and open source projects."}, {"start": 286.8, "end": 288.8, "text": " It really is as good as it gets."}, {"start": 288.8, "end": 297.8, "text": " Make sure to visit them through wnb.com slash papers, or just click the link in the video description, and you can get the free demo today."}, {"start": 297.8, "end": 303.8, "text": " Our thanks to weights and biases for their longstanding support, and for helping us make better videos for you."}, {"start": 303.8, "end": 327.8, "text": " Thanks for watching, and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=JmVQJg-glYA
Painting the Mona Lisa...With Triangles! 📐
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Differentiable Vector Graphics Rasterization for Editing and Learning" is available here: - https://people.csail.mit.edu/tzumao/diffvg/ - https://people.csail.mit.edu/tzumao/diffvg/supplementary_webpage/ The mentioned Mona Lisa genetic algorithm is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/mona_lisa_parallel_genetic_algorithm/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #vectorgraphics
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. What you see here is a bunch of vector images. Vector images are not like most images that you see on the internet. Those are raster images. Those are like photos and are made of pixels. Vector images, however, are not made of pixels; they are made of shapes. These vector images have lots of advantages. They have really small file sizes, can be zoomed into as much as we desire, and things don't get pixelated. And hence, vector images are really well suited for logos, user interface icons, and more. Now, if we wish to, we can convert vector images into raster images, so the shapes will become pixels. This is easy, but here is the problem. If we do it once, there is no going back, or at least not easily. This method promises to make this conversion a two-way street, so we can take a raster image, a photo, if you will, and work with it as if it were a vector image. Now, what does that mean? Oh boy, a lot of goodies. For instance, we can perform sculpting, or in other words, manipulating shapes without touching any pixels. We can work with the shapes here instead, which is much easier. Or, my favorite, we can perform painterly rendering. Now, what you see here is not the new algorithm performing this. This is a genetic algorithm I wrote a few years ago that takes a target image, which is the Mona Lisa here, takes a bunch of randomly colored triangles, and starts reorganizing them to get as close to the target image as possible. The source code and the video explaining how it works are available in the video description. And now, let's see how this new method performs on a similar task. It can start with a large number of different shapes, and just look at how beautifully these shapes evolve and start converging to the target image. Loving it. But that's not all. It also has a nice solution to an old but challenging problem in computer graphics that is referred to as seam carving. If you ask me, I like to call it image squishing. Why? Well, look here. This gives us an easy way of intelligently squishing an image into different aspect ratios. So good. So, can we measure how well it does what it does? How does it compare to Adobe's state-of-the-art method when vectorizing a photo? Well, it can not only do more, but it also does it better. The new method is significantly closer to the target image here, no question about it. And now comes the best part. It not only provides higher-quality results than the previous methods, but it only takes approximately a second to perform all this. Wow. So, there you go. Finally, with this technique, we can edit pixels as if they weren't pixels at all. It feels like we are living in a science fiction world. What a time to be alive. This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to lambdalabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
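For the curious, here is a small Python sketch in the spirit of that triangle experiment: start from randomly colored, semi-transparent triangles and keep only the mutations that bring the rendered image closer to a target. It is a stripped-down, mutation-only variant of the genetic-algorithm idea (no population or crossover), the target image is a synthetic gradient rather than the Mona Lisa, and the rasterizer is a bare-bones point-in-triangle test, so treat it as an illustration only.

# Approximate a target image with randomly colored triangles via simple
# hill climbing (toy illustration, not the original genetic algorithm).
import numpy as np

rng = np.random.default_rng(42)
H = W = 64
yy, xx = np.mgrid[0:H, 0:W]
# Synthetic target standing in for the Mona Lisa (any RGB image in [0,1] works).
target = np.stack([xx / W, yy / H, np.full((H, W), 0.5)], axis=-1)

def rasterize(tris, colors):
    """Paint semi-transparent triangles onto a white canvas (no anti-aliasing)."""
    img = np.ones((H, W, 3))
    px = np.stack([xx + 0.5, yy + 0.5], axis=-1)  # pixel centers
    for (a, b, c), col in zip(tris, colors):
        def edge(p, q):  # signed area test against every pixel center
            return (q[0] - p[0]) * (px[..., 1] - p[1]) - (q[1] - p[1]) * (px[..., 0] - p[0])
        e0, e1, e2 = edge(a, b), edge(b, c), edge(c, a)
        inside = ((e0 >= 0) & (e1 >= 0) & (e2 >= 0)) | ((e0 <= 0) & (e1 <= 0) & (e2 <= 0))
        img[inside] = 0.5 * img[inside] + 0.5 * col
    return img

def loss(img):
    return np.mean((img - target) ** 2)

n_tris = 30
tris = rng.uniform(0, W, size=(n_tris, 3, 2))     # random triangle vertices
colors = rng.uniform(0, 1, size=(n_tris, 3))      # random triangle colors

best = loss(rasterize(tris, colors))
for step in range(300):
    i = rng.integers(n_tris)
    new_tris, new_colors = tris.copy(), colors.copy()
    if rng.random() < 0.5:
        new_tris[i] += rng.normal(scale=3.0, size=(3, 2))  # nudge vertices
    else:
        new_colors[i] = np.clip(new_colors[i] + rng.normal(scale=0.1, size=3), 0, 1)
    cand = loss(rasterize(new_tris, new_colors))
    if cand < best:                                         # keep only improvements
        tris, colors, best = new_tris, new_colors, cand

print(f"final mean squared error: {best:.4f}")

The paper's contribution is, in effect, to replace this blind mutate-and-test loop with gradients obtained through a differentiable rasterizer, which is why its shapes converge to the target so much faster.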
[{"start": 0.0, "end": 4.48, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karaj Zonai-Fehir."}, {"start": 4.48, "end": 7.28, "text": " What you see here is a bunch of vector images."}, {"start": 7.28, "end": 11.6, "text": " Vector images are not like most images that you see on the internet."}, {"start": 11.6, "end": 13.44, "text": " Those are RESTR images."}, {"start": 13.44, "end": 16.72, "text": " Those are like photos and are made of pixels."}, {"start": 16.72, "end": 21.2, "text": " While vector images are not made of pixels, they are made of shapes."}, {"start": 21.2, "end": 23.76, "text": " These vector images have lots of advantages."}, {"start": 23.76, "end": 26.0, "text": " They have really small file sizes."}, {"start": 26.0, "end": 31.44, "text": " Can be zoomed into as much as we desire and things don't get pixelated."}, {"start": 31.44, "end": 35.2, "text": " And hence, vector images are really well suited for logos,"}, {"start": 35.2, "end": 37.36, "text": " user interface icons, and more."}, {"start": 38.08, "end": 43.28, "text": " Now, if we wish to, we can convert vector images into RESTR images,"}, {"start": 43.28, "end": 45.44, "text": " so the shapes will become pixels."}, {"start": 46.0, "end": 49.04, "text": " This is easy, but here is the problem."}, {"start": 49.04, "end": 53.519999999999996, "text": " If we do it once, there is no going back, or at least not easily."}, {"start": 53.52, "end": 57.2, "text": " This method promises to make this conversion a two-way street,"}, {"start": 57.2, "end": 61.92, "text": " so we can take a RESTR image, a photo, if you will, and work with it,"}, {"start": 61.92, "end": 63.84, "text": " as if it were a vector image."}, {"start": 64.56, "end": 66.08, "text": " Now, what does that mean?"}, {"start": 66.56, "end": 68.64, "text": " Oh, boy, a lot of goodies."}, {"start": 68.64, "end": 72.32000000000001, "text": " For instance, we can perform sculpting, or in other words,"}, {"start": 72.32000000000001, "end": 75.60000000000001, "text": " manipulating shapes without touching any pixels."}, {"start": 76.32000000000001, "end": 78.56, "text": " We can work with the shapes here instead."}, {"start": 78.56, "end": 83.68, "text": " And much easier, or my favorite, perform painterly rendering."}, {"start": 84.4, "end": 88.32000000000001, "text": " Now, what you see here is not the new algorithm performing this."}, {"start": 88.32000000000001, "end": 91.92, "text": " This is a genetic algorithm I wrote a few years ago"}, {"start": 91.92, "end": 95.36, "text": " that takes a target image, which is the Mona Lisa here,"}, {"start": 95.36, "end": 98.64, "text": " takes a bunch of randomly colored triangles,"}, {"start": 98.64, "end": 103.68, "text": " and starts reorganizing them to get as close to the target image as possible."}, {"start": 103.68, "end": 109.04, "text": " The source code and the video explaining how it works is available in the video description."}, {"start": 109.04, "end": 113.92, "text": " And now, let's see how this new method performs on a similar task."}, {"start": 113.92, "end": 116.72000000000001, "text": " It can start with a large number of different shapes,"}, {"start": 116.72000000000001, "end": 120.72, "text": " and just look at how beautifully these shapes evolve,"}, {"start": 120.72, "end": 122.88000000000001, "text": " and start converging to the target image."}, {"start": 122.88000000000001, "end": 124.48, "text": " Loving it."}, {"start": 124.48, "end": 126.32000000000001, "text": " But 
that's not all."}, {"start": 126.32000000000001, "end": 128.96, "text": " It also has a nice solution to an old,"}, {"start": 128.96, "end": 133.28, "text": " but challenging problem in computer graphics that is referred to as"}, {"start": 133.28, "end": 134.88, "text": " seam carving."}, {"start": 134.88, "end": 138.64000000000001, "text": " If you ask me, I like to call it image squishing."}, {"start": 138.64000000000001, "end": 139.84, "text": " Why?"}, {"start": 139.84, "end": 142.08, "text": " Well, look here."}, {"start": 142.08, "end": 145.84, "text": " This gives us an easy way of intelligently squishing an image"}, {"start": 145.84, "end": 148.08, "text": " into different aspect ratios."}, {"start": 148.08, "end": 150.08, "text": " So good."}, {"start": 150.08, "end": 154.24, "text": " So, can we measure how well it does what it does?"}, {"start": 154.24, "end": 157.68, "text": " How does it compare to Adobe's state-of-the-art method"}, {"start": 157.68, "end": 159.68, "text": " when vectorizing a photo?"}, {"start": 159.68, "end": 165.44, "text": " Well, it can not only do more, but it also does it better."}, {"start": 165.44, "end": 169.36, "text": " The new method is significantly closer to the target image here,"}, {"start": 169.36, "end": 170.88, "text": " no question about it."}, {"start": 170.88, "end": 172.88, "text": " And now comes the best part."}, {"start": 172.88, "end": 177.04000000000002, "text": " It not only provides higher quality results than the previous methods,"}, {"start": 177.04000000000002, "end": 181.84, "text": " but it only takes approximately a second to perform all this."}, {"start": 182.64000000000001, "end": 183.84, "text": " Wow."}, {"start": 183.84, "end": 185.36, "text": " So, there you go."}, {"start": 185.36, "end": 188.4, "text": " Finally, with this technique, we can edit pixels"}, {"start": 188.4, "end": 190.48000000000002, "text": " as if they weren't pixels at all."}, {"start": 191.12, "end": 194.56, "text": " It feels like we are living in a science fiction world."}, {"start": 194.56, "end": 195.84, "text": " What a time to be alive."}, {"start": 196.48000000000002, "end": 199.92000000000002, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 199.92000000000002, "end": 203.52, "text": " If you're looking for inexpensive Cloud GPUs for AI,"}, {"start": 203.52, "end": 205.92000000000002, "text": " check out Lambda GPU Cloud."}, {"start": 205.92000000000002, "end": 210.64000000000001, "text": " They've recently launched Quadro RTX 6000, RTX 8000,"}, {"start": 210.64000000000001, "end": 212.72, "text": " and V100 instances."}, {"start": 212.72, "end": 216.32, "text": " And hold onto your papers because Lambda GPU Cloud"}, {"start": 216.32, "end": 220.32, "text": " can cost less than half of AWS and Azure."}, {"start": 220.32, "end": 225.68, "text": " Plus, they are the only Cloud service with 48GB, RTX 8000."}, {"start": 225.68, "end": 228.56, "text": " Join researchers at organizations like Apple,"}, {"start": 228.56, "end": 232.16, "text": " MIT, and Caltech in using Lambda Cloud instances,"}, {"start": 232.16, "end": 233.92, "text": " workstations, or servers."}, {"start": 233.92, "end": 236.0, "text": " Make sure to go to lambdaleps.com,"}, {"start": 236.0, "end": 240.72, "text": " slash papers to sign up for one of their amazing GPU instances today."}, {"start": 240.72, "end": 243.44, "text": " Our thanks to Lambda for their long-standing support"}, {"start": 243.44, "end": 246.24, "text": " and for helping us 
make better videos for you."}, {"start": 246.24, "end": 248.48, "text": " Thanks for watching and for your generous support,"}, {"start": 248.48, "end": 275.2, "text": " and I'll see you next time."}]
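The episode above mentions seam carving, the classic technique for intelligently squishing an image into a different aspect ratio. As a rough illustration of that classic idea only, and not of this paper's vector-based approach, here is a minimal NumPy sketch that removes one low-energy vertical seam from an image; the gradient-based energy, the function name, and the array shapes are assumptions made for the example.

```python
import numpy as np

def remove_vertical_seam(img):
    """img: H x W x 3 float array. Returns an H x (W - 1) x 3 array."""
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)               # simple gradient-magnitude energy
    energy = np.abs(gx) + np.abs(gy)

    h, w = energy.shape
    cost = energy.copy()
    # Dynamic programming: each pixel adds the cheapest of its three upper neighbors.
    for i in range(1, h):
        up = cost[i - 1]
        left = np.roll(up, 1);   left[0] = np.inf
        right = np.roll(up, -1); right[-1] = np.inf
        cost[i] += np.minimum(np.minimum(left, up), right)

    # Backtrack the cheapest seam from the bottom row upwards.
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + int(np.argmin(cost[i, lo:hi]))

    # Remove one pixel per row along the seam.
    keep = np.ones((h, w), dtype=bool)
    keep[np.arange(h), seam] = False
    return img[keep].reshape(h, w - 1, 3)
```

Calling this repeatedly narrows the image while preferentially removing low-detail regions, which is the behavior the video demonstrates.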
Two Minute Papers
https://www.youtube.com/watch?v=Sr2ga3BBMTc
Can An AI Design Our Tax Policy? 💰📊
❤️ Check out Perceptilabs and sign up for a free demo here: https://perceptilabs.com/papers 📝 The paper "The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies" is available here: https://blog.einstein.ai/the-ai-economist/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #taxpolicy #taxes
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. We hear many opinions every day on how tax policy should change, what we should do, and how it would affect us. But of course, whenever we change something, we have to be mindful of its ramifications. Experimenting with tax policies willy-nilly is of course ill-advised in the real world, but we are researchers, so we can create our own little virtual world, populate it with virtual character A.I.s, and experiment all we want here. Just imagine a future where a politician comes and says, I will lift up the middle class by creating policy X. Well, let's simulate that policy X in a virtual world and see if it actually works as they promised. That would be glorious, but of course, I know, I know, wishful thinking, right? But maybe with this paper, not so much. Look, these workers are reinforcement learning agents, which means that they observe their environment, inquire what the tax rates and other parameters are, and decide how to proceed. They start working, they pay their taxes, and over time, they learn to maximize their own well-being given a tax system. This is the inner loop of the simulation. Now comes the even cooler part, the outer loop. This means that we have not only simulated worker A.I.s, but simulated policy maker A.I.s too, and they look at inequality, wealth distribution, and other market dynamics, and adjust the tax policy to maximize something that we find important. And herein lies the key of the experiment. Let's start with the goal of the simulation. We seek a tax policy that maximizes equality and productivity at the same time. This is, of course, immensely difficult. Every decision comes with its own trade-offs. We'll talk about the results in a moment, but first, let's marvel at the fact that it simulates agents with higher and lower skills, and not only that, but with these, specialization starts appearing. For instance, the lower-skill agents started gathering and selling materials, while the higher-skill agents started buying up the materials to build houses. The goal of the policy maker A.I. is to maximize the equality and productivity of all of these agents. As a comparison, it also simulated the 2018 US Federal Tax Rate, an analytical tax formula where the marginal tax rate decreases with income, and a free market model, and also proposed its own tax policies. So, how well did it do? Let's have a look at the results together. The free market model excels in maximizing productivity, which sounds great, until we find out that it does this at the cost of equality. Look, the top agent owns almost everything, leaving nearly nothing for everyone else. The US Federal Tax and Analytical models strike a better balance between the two, but neither seems optimal. So, where is the A.I. Economist's model on this curve? Well, hold on to your papers, because it is here. It improved the trade-off between equality and productivity by 16% by proposing a system that is harder to game, that gives a bigger piece of the pie to the middle class, and that subsidizes lower-skill A.I. workers. And do not forget that the key here is that it not only proposes things, but it can prove that these policies serve everyone, at least within the constraints of this simulation. Now, that's what I call a great paper. And there is so much more useful knowledge in this paper, I really urge you to have a look, it is a fantastic read. And needless to say, I'd love to see more research in this area.
For instance, I'd love to know what happens if we start optimizing for sustainability as well. These objectives can be specified in this little virtual world, and we can experiment with what happens if we are guided by these requirements. And now, onwards to more transparent tax policies. What a time to be alive! Perceptilabs is a visual API for TensorFlow, carefully designed to make machine learning as intuitive as possible. This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it. Look, it lets you toggle between the visual modeler and the code editor. It even generates visualizations for all the model variables and gives you recommendations both during modeling and training, and does all this automatically. I only wish I had a tool like this when I was working on my neural networks during my PhD years. Visit perceptilabs.com/papers to easily install the free local version of their system today. Our thanks to Perceptilabs for their support, and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
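To make the inner-loop and outer-loop structure described above a little more concrete, here is a deliberately tiny toy sketch; it is not the AI Economist's actual reinforcement learning setup. Simulated workers pick how much to work under a flat tax, and a crude planner simply sweeps tax rates to maximize equality times productivity. The utility function, the skill distribution, and all constants are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
skills = rng.uniform(0.5, 2.0, size=20)   # heterogeneous worker skills (made up)

def inner_loop(tax_rate):
    """Each worker picks labor to maximize after-tax income minus effort cost."""
    labors = np.linspace(0.0, 1.0, 101)
    incomes = []
    for s in skills:
        gross = s * labors
        utility = (1.0 - tax_rate) * gross - 0.5 * labors ** 2  # quadratic effort cost
        incomes.append(s * labors[np.argmax(utility)])
    incomes = np.array(incomes)
    # Collected taxes are redistributed equally in this toy world.
    return (1.0 - tax_rate) * incomes + tax_rate * incomes.mean()

def gini(x):
    x = np.sort(x)
    n = len(x)
    return (2 * np.arange(1, n + 1) - n - 1).dot(x) / (n * x.sum())

def social_objective(post_tax):
    equality = 1.0 - gini(post_tax)      # 1 means perfectly equal incomes
    productivity = post_tax.sum()        # total income in the economy
    return equality * productivity

# Outer loop: a crude "planner" that just sweeps flat tax rates.
best_rate = max(np.linspace(0.0, 0.9, 19),
                key=lambda r: social_objective(inner_loop(r)))
print("best flat tax rate in this toy world:", float(best_rate))
```

Even in this toy, the familiar trade-off appears: higher rates raise equality through redistribution but shrink total output because workers choose to work less.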
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Karajon A. Feher."}, {"start": 4.6000000000000005, "end": 12.3, "text": " We hear many opinions every day on how text policy should change, what we should do, and how it would affect us."}, {"start": 12.3, "end": 17.0, "text": " But of course, whenever we change something, we have to be mindful of its ramifications."}, {"start": 17.0, "end": 22.3, "text": " Experimenting with text policies willy-nilly is of course ill-advised in the real world,"}, {"start": 22.3, "end": 32.8, "text": " but we are researchers so we can create our little virtual world, populate them with virtual character A.I.s, and we can experiment all we want here."}, {"start": 32.8, "end": 40.3, "text": " Just imagine a future where a politician comes and says, I will lift up the middle class by creating policy X,"}, {"start": 40.3, "end": 47.8, "text": " well, let's simulate that policy X in a virtual world and see if it actually works as they promised."}, {"start": 47.8, "end": 54.3, "text": " That would be glorious, but of course, I know, I know, wishful thinking, right?"}, {"start": 54.3, "end": 57.8, "text": " But maybe with this paper not so much."}, {"start": 57.8, "end": 64.3, "text": " Look, these workers are reinforcement learning agents, which means that they observe their environment,"}, {"start": 64.3, "end": 70.3, "text": " inquire what the text rates and other parameters are, and decide how to proceed."}, {"start": 70.3, "end": 79.3, "text": " They start working, they pay their taxes, and over time, they learn to maximize their own well-being given a tax system."}, {"start": 79.3, "end": 85.3, "text": " This is the inner loop of the simulation. Now comes the even cooler part, the outer loop."}, {"start": 85.3, "end": 92.3, "text": " This means that we have not only simulated worker A.I.s, but simulated policy maker A.I.s 2,"}, {"start": 92.3, "end": 103.3, "text": " and they look at inequality, wealth, distribution, and other market dynamics, and adjust the tax policy to maximize something that we find important."}, {"start": 103.3, "end": 106.3, "text": " And herein lies the key of the experiment."}, {"start": 106.3, "end": 109.3, "text": " Let's start with the goal of the simulation."}, {"start": 109.3, "end": 116.3, "text": " We seek a tax policy that maximizes equality and productivity at the same time."}, {"start": 116.3, "end": 119.3, "text": " This is, of course, immensely difficult."}, {"start": 119.3, "end": 124.3, "text": " Every decision comes with its own trade-offs. We talk about the results in a moment, but first,"}, {"start": 124.3, "end": 134.3, "text": " let's marvel at the fact that it simulates agents with higher and lower skills, and not only that, but with these, specialization starts appearing."}, {"start": 134.3, "end": 144.3, "text": " For instance, the lower skill agents started gathering and selling materials where the higher skill agents started buying up the materials to build houses."}, {"start": 144.3, "end": 151.3, "text": " The goal of the policy maker A.I. is to maximize the equality and productivity of all of these agents."}, {"start": 151.3, "end": 166.3, "text": " As a comparison, it also simulated the 2018 US Federal Tax Rate, an analytical tax formula where the marginal tax rate decreases with income, a free market model, and also proposed its own tax policies."}, {"start": 166.3, "end": 171.3, "text": " So, how well did it do? 
Let's have a look at the results together."}, {"start": 171.3, "end": 181.3, "text": " The free market model excels in maximizing productivity, which sounds great, until we find out that it does this at the cost of equality."}, {"start": 181.3, "end": 187.3, "text": " Look, the top agent owns almost everything, leaving nearly nothing for everyone else."}, {"start": 187.3, "end": 194.3, "text": " The US Federal Tax and Analytical models strike a better balance between the two, but neither seems optimal."}, {"start": 194.3, "end": 198.3, "text": " So, where is the A.I. economic model on this curve?"}, {"start": 198.3, "end": 202.3, "text": " Well, hold on to your papers because it is here."}, {"start": 202.3, "end": 217.3, "text": " It improved the trade-off between equality and productivity by 16% by proposing a system that is harder to game, that gives a bigger piece of the pie for the middle class and subsidizes lower skill A.I. workers."}, {"start": 217.3, "end": 228.3, "text": " And do not forget that the key here is that it not only proposes things, but it can prove that these policies serve everyone, at least within the constraints of this simulation."}, {"start": 228.3, "end": 237.3, "text": " Now, that's what I call a great paper. And there is so much more useful knowledge in this paper, I really urge you to have a look, it is a fantastic read."}, {"start": 237.3, "end": 241.3, "text": " And needless to say, I'd love to see more research in this area."}, {"start": 241.3, "end": 247.3, "text": " For instance, I'd love to know what happens if we start optimizing for sustainability as well."}, {"start": 247.3, "end": 255.3, "text": " These objectives can be specified in this little virtual world, and we can experiment with what happens if we are guided by these requirements."}, {"start": 255.3, "end": 259.3, "text": " And now, onwards to more transparent tax policies."}, {"start": 259.3, "end": 268.3, "text": " What a time to be alive! Perceptilebs is a visual API for TensorFlow carefully designed to make machine learning as intuitive as possible."}, {"start": 268.3, "end": 278.3, "text": " This gives you a faster way to build out models with more transparency into how your model is architected, how it performs, and how to debug it."}, {"start": 278.3, "end": 282.3, "text": " Look, it lets you toggle between the visual modeler and the code editor."}, {"start": 282.3, "end": 292.3, "text": " It even generates visualizations for all the model variables and gives you recommendations both during modeling and training, and does all this automatically."}, {"start": 292.3, "end": 298.3, "text": " I only wish I had a tool like this when I was working on my neural networks during my PhD years."}, {"start": 298.3, "end": 305.3, "text": " Visit perceptilebs.com slash papers to easily install the free local version of their system today."}, {"start": 305.3, "end": 310.3, "text": " Our thanks to perceptilebs for their support, and for helping us make better videos for you."}, {"start": 310.3, "end": 323.3, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=BjkgyKEQbSM
What Is 3D Photography? 🎑
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/authors/One-Shot-3D-Photography/reports/Paper-Summary-One-Shot-3D-Photography--VmlldzozNjE2MjQ 📝 The paper "One Shot 3D Photography" is available here: https://facebookresearch.github.io/one_shot_3d_photography/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #3dphotos
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. This is a standard color photo made with a smartphone. Hence, it contains only a two-dimensional representation of the world, and when we look at it, our brain is able to reconstruct the 3D information from it. And I wonder, would it be possible for an AI to do the same, go all the way, and create a 3D version of this photo that we can rotate around? Well, this new learning-based method promises exactly that, and if that is at all possible, even more. These are big words, so let's have a look at whether it can indeed live up to its promise. So, first, we take a photograph, and we'll find out together in a moment what kind of phone is needed for this. Probably an amazing one, right? For now, this will be the input, and now, let's see the 3D photo as an output. Let's rotate this around, and wow, this is amazing. And you know what is even more amazing: since pretty much every smartphone is equipped with a gyroscope, these photos can be rotated around in harmony with the rotation of our phones. And wait a second, is this some sort of misunderstanding, or do I see correctly that we can even look behind the human if we wanted to? That content was not even part of the original photo. How does this work? More on that in a moment. Also, just imagine putting on a pair of VR glasses, looking at a plain 2D photo, and getting an experience as if we were really there. It truly feels like we are living in a science fiction world. If we grab our trusty smartphone and use these images, we can create a timeline full of these 3D photos and marvel at how beautifully we can scroll such a timeline here. And now we have piled up quite a few questions here. How is this wizardry possible? What kind of phone do we need for this? Do we need a depth sensor? Maybe even LIDAR? Let's look under the hood and find out together. This is the input: one color photograph. That is expected, so let's continue. Goodness, now this is unexpected. The algorithm creates a depth map by itself. This depth map tells the algorithm how far different parts of the image are from the camera. Just look at how crisp the outlines are. My goodness. So good. Then, with this depth information, it now has an understanding of what is where in this image and creates these layers. Which is, unfortunately, not much help, because, as you remember, we don't have any information on what is behind the person. No matter, because we can use a technique that implements image inpainting to fill in these regions with sensible data. And now, with this, we can start exploring these 3D photos. So, if it created this depth map from the color information, this means that we don't even need a depth sensor for this. Just a simple color photograph. But wait a minute, this means that we can plug in any photo from any phone or camera that we or someone else took at any time. And I mean at any time, right? Just imagine taking a black and white photo of a historic event, colorizing it with the previous learning-based method, and passing this color image to this new method. And then, this happens. My goodness. So, all this looks and sounds great. But how long do we have to wait for such a 3D photo to be generated? Does my phone battery get completely drained by the time all this computation is done? What is your guess? Please stop the video and leave a comment with your guess. I'll wait. Alright, so is this a battery killer? Let's see. The depth estimation step takes, whoa!
A quarter of a second. Inpainting, half a second. And after a little housekeeping, we find out that this is not a battery killer at all, because the whole process is done in approximately one second. Holy mother of papers! I am very excited to see this technique out there in the wild as soon as possible. What you see here is a report on this exact paper we have talked about, which was made by Weights & Biases. I put a link to it in the description. Make sure to have a look. I think it helps you understand this paper better. If you work with learning algorithms on a regular basis, make sure to check out Weights & Biases. Their system is designed to help you organize your experiments, and it is so good it could shave off weeks or even months of work from your projects, and is completely free for all individuals, academics, and open source projects. This really is as good as it gets. And it is hardly a surprise that they are now used by over 200 companies and research institutions. Make sure to visit them through wandb.com/papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their longstanding support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
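As a rough illustration of the depth-based idea described above, and not of the One Shot 3D Photography pipeline itself, the sketch below shifts each pixel horizontally in proportion to its inverse depth to fake a small camera move. The holes it leaves open are exactly the occluded regions the paper fills with inpainting; here they simply stay black. The function name, the parallax constant, and the array shapes are assumptions.

```python
import numpy as np

def fake_parallax(rgb, depth, shift_px=8.0):
    """rgb: H x W x 3 uint8, depth: H x W (larger = farther). Returns a new view."""
    h, w, _ = rgb.shape
    disparity = 1.0 / np.maximum(depth, 1e-6)    # nearby pixels should move more
    disparity = disparity / disparity.max()      # normalize to [0, 1]
    out = np.zeros_like(rgb)                     # unfilled pixels stay black (holes)
    depth_buffer = np.full((h, w), np.inf)

    ys, xs = np.mgrid[0:h, 0:w]
    new_xs = np.clip((xs + shift_px * disparity).astype(int), 0, w - 1)

    # Forward-splat every pixel, keeping the nearest one when several collide.
    for y, x, nx in zip(ys.ravel(), xs.ravel(), new_xs.ravel()):
        if depth[y, x] < depth_buffer[y, nx]:
            depth_buffer[y, nx] = depth[y, x]
            out[y, nx] = rgb[y, x]
    return out
```

In the real pipeline, the predicted depth map plays the role of the `depth` argument here, and the black holes would be handed to the inpainting network instead of being left empty.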
[{"start": 0.0, "end": 4.5, "text": " Dear Fellow Scholars, this is two minute papers with Dr. Karajjola Efehir."}, {"start": 4.5, "end": 8.22, "text": " This is a standard color photo made with a smartphone."}, {"start": 8.22, "end": 12.02, "text": " Hence, it contains only a two-dere presentation of the world,"}, {"start": 12.02, "end": 17.62, "text": " and when we look at it, our brain is able to reconstruct the 3D information from it."}, {"start": 17.62, "end": 23.38, "text": " And I wonder, would it be possible for an AI to do the same and go all the way"}, {"start": 23.38, "end": 28.060000000000002, "text": " and create a 3D version of this photo that we can rotate around?"}, {"start": 28.06, "end": 32.06, "text": " Well, this new learning based method promises exactly that,"}, {"start": 32.06, "end": 35.06, "text": " and if that is at all possible, even more."}, {"start": 35.06, "end": 40.06, "text": " These are big words, so let's have a look if it can indeed live up to its promise."}, {"start": 40.06, "end": 47.06, "text": " So, first, we take a photograph and we'll find out together in a moment what kind of phone is needed for this."}, {"start": 47.06, "end": 49.56, "text": " Probably an amazing one, right?"}, {"start": 49.56, "end": 55.06, "text": " For now, this will be the input, and now, let's see the 3D photo as an output."}, {"start": 55.06, "end": 60.06, "text": " Let's rotate this around, and wow, this is amazing."}, {"start": 60.06, "end": 66.06, "text": " And you know what is even more amazing, since pretty much every smartphone is equipped with a gyroscope,"}, {"start": 66.06, "end": 70.56, "text": " these photos can be rotated around in harmony with the rotation of our phones,"}, {"start": 70.56, "end": 75.06, "text": " and wait a second, is this some sort of misunderstanding,"}, {"start": 75.06, "end": 80.56, "text": " or do I see correctly that we can even look behind the human if we wanted to?"}, {"start": 80.56, "end": 84.56, "text": " That content was not even part of the original photo."}, {"start": 84.56, "end": 87.56, "text": " How does this work? 
More on that in a moment."}, {"start": 87.56, "end": 93.56, "text": " Also, just imagine putting on a pair of VR glasses and looking at a plane to the photo"}, {"start": 93.56, "end": 97.56, "text": " and get an experience as if we were really there."}, {"start": 97.56, "end": 101.06, "text": " It truly feels like we are living in a science fiction world."}, {"start": 101.06, "end": 104.56, "text": " If we grab our trusty smartphone and use these images,"}, {"start": 104.56, "end": 112.56, "text": " we can create a timeline full of these 3D photos and marvel at how beautifully we can scroll such a timeline here."}, {"start": 112.56, "end": 115.56, "text": " And now we have piled up quite a few questions here."}, {"start": 115.56, "end": 117.56, "text": " How is this wizardry possible?"}, {"start": 117.56, "end": 120.56, "text": " What kind of phone do we need for this?"}, {"start": 120.56, "end": 122.56, "text": " Do we need a depth sensor?"}, {"start": 122.56, "end": 124.56, "text": " Maybe even LIDAR."}, {"start": 124.56, "end": 127.56, "text": " Let's look under the hood and find out together."}, {"start": 127.56, "end": 129.56, "text": " This is the input."}, {"start": 129.56, "end": 133.56, "text": " One colored photograph that is expected, and let's continue."}, {"start": 133.56, "end": 137.56, "text": " Goodness, now this is unexpected."}, {"start": 137.56, "end": 140.56, "text": " The algorithm creates a depth map by itself."}, {"start": 140.56, "end": 146.56, "text": " This depth map tells the algorithm how far different parts of the image are from the camera."}, {"start": 146.56, "end": 149.56, "text": " Just look at how crisp the outlines are."}, {"start": 149.56, "end": 150.56, "text": " My goodness."}, {"start": 150.56, "end": 151.56, "text": " So good."}, {"start": 151.56, "end": 153.56, "text": " Then with this depth information,"}, {"start": 153.56, "end": 159.56, "text": " it now has an understanding of what is where in this image and creates these layers."}, {"start": 159.56, "end": 162.56, "text": " Which is, unfortunately, not much help."}, {"start": 162.56, "end": 167.56, "text": " As you remember, we don't have any information on what is behind the person."}, {"start": 167.56, "end": 172.56, "text": " No matter because we can use a technique that implements image in painting"}, {"start": 172.56, "end": 175.56, "text": " to fill in these regions with sensible data."}, {"start": 175.56, "end": 179.56, "text": " And now, with this, we can start exploring these 3D photos."}, {"start": 179.56, "end": 183.56, "text": " So if it created this depth map from the color information,"}, {"start": 183.56, "end": 187.56, "text": " this means that we don't even need a depth sensor for this."}, {"start": 187.56, "end": 190.56, "text": " Just a simple color photograph."}, {"start": 190.56, "end": 196.56, "text": " But, wait a minute, this means that we can plug in any photo from any phone or camera"}, {"start": 196.56, "end": 200.56, "text": " that we or someone else talk at any time."}, {"start": 200.56, "end": 203.56, "text": " And I mean at any time, right?"}, {"start": 203.56, "end": 207.56, "text": " Just imagine taking a black and white photo of a historic event,"}, {"start": 207.56, "end": 213.56, "text": " colorizing it with the previous learning based method and passing this color image to this new method."}, {"start": 213.56, "end": 215.56, "text": " And then this happens."}, {"start": 215.56, "end": 217.56, "text": " My goodness."}, {"start": 217.56, "end": 220.56, "text": 
" So all this looks and sounds great."}, {"start": 220.56, "end": 224.56, "text": " But how long do we have to wait for such a 3D photo to be generated?"}, {"start": 224.56, "end": 230.56, "text": " Does my phone battery get completely drained by the time all this computation is done?"}, {"start": 230.56, "end": 232.56, "text": " What is your guess?"}, {"start": 232.56, "end": 235.56, "text": " Please stop the video and leave a comment with your guess."}, {"start": 235.56, "end": 236.56, "text": " I'll wait."}, {"start": 236.56, "end": 239.56, "text": " Alright, so is this a battery killer?"}, {"start": 239.56, "end": 240.56, "text": " Let's see."}, {"start": 240.56, "end": 243.56, "text": " The depth estimation step takes, whoa!"}, {"start": 243.56, "end": 247.56, "text": " A quarter of a second, in painting, half a second,"}, {"start": 247.56, "end": 253.56, "text": " and after a little housekeeping, we find out that this is not a battery killer at all"}, {"start": 253.56, "end": 257.56, "text": " because the whole process is done in approximately one second."}, {"start": 257.56, "end": 259.56, "text": " Holy matter of papers."}, {"start": 259.56, "end": 265.56, "text": " I am very excited to see this technique out there in the wild as soon as possible."}, {"start": 265.56, "end": 269.56, "text": " What you see here is a report of this exact paper we have talked about"}, {"start": 269.56, "end": 271.56, "text": " which was made by Wades and Biasis."}, {"start": 271.56, "end": 273.56, "text": " I put a link to it in the description."}, {"start": 273.56, "end": 275.56, "text": " Make sure to have a look."}, {"start": 275.56, "end": 278.56, "text": " I think it helps you understand this paper better."}, {"start": 278.56, "end": 281.56, "text": " If you work with learning algorithms on a regular basis,"}, {"start": 281.56, "end": 284.56, "text": " make sure to check out Wades and Biasis."}, {"start": 284.56, "end": 287.56, "text": " Their system is designed to help you organize your experiments"}, {"start": 287.56, "end": 292.56, "text": " and it is so good it could shave off weeks or even months of work from your projects"}, {"start": 292.56, "end": 298.56, "text": " and is completely free for all individuals, academics, and open source projects."}, {"start": 298.56, "end": 300.56, "text": " This really is as good as it gets."}, {"start": 300.56, "end": 306.56, "text": " And it is hardly a surprise that they are now used by over 200 companies and research institutions."}, {"start": 306.56, "end": 313.56, "text": " Make sure to visit them through wnb.com slash papers or just click the link in the video description"}, {"start": 313.56, "end": 315.56, "text": " and you can get a free demo today."}, {"start": 315.56, "end": 318.56, "text": " Our thanks to Wades and Biasis for their longstanding support"}, {"start": 318.56, "end": 321.56, "text": " and for helping us make better videos for you."}, {"start": 321.56, "end": 345.56, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=s8Nm_ytwO6w
Soft Body Wiggles And Jiggles…Effortlessly! 🐘
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/wandb/getting-started/reports/Debug-Compare-Reproduce-Machine-Learning-Models--VmlldzoyNzY5MDk?utm_source=karoly 📝 The paper "Complementary Dynamics" is available here: https://www.dgp.toronto.edu/projects/complementary-dynamics/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: https://discordapp.com/invite/hbcTJu2 Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #wiggles #jiggles
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. I hope you like wiggles and jiggles, because today we are going to see a lot of them. You see, this technique promises to imbue a rigged animation with elastoplastic secondary effects. Now, if you tell this to a computer graphics researcher, they will be extremely happy to hear that this is finally possible, but what does this really mean? This beautiful fish animation is one of the finest demonstrations of the new method. So, what happened here? Well, the first part means that we modeled a piece of 3D geometry and we wish to make it move, but in order to make this movement believable, we have to specify where the bones and joints are located within the model. This process is called rigging, and this model will be the input for it. We can make it move with traditional methods, well, kind of. You see that the bones and joints are working, but the model is still solid. For instance, look, the trunk and ears are both completely solid. This is not what we would expect to see from this kind of motion. So, can we do better than this? Well, hold on to your papers and let's see how this new technique enhances these animations. Oh yes, floppy ears. Also, the trunk is dangling everywhere. What a lovely animation. Prepare for a lot more wiggles and jiggles. Now, I love how we can set up the material properties for this method. Of course, it cannot just make decisions by itself, because it would compete with the artist's vision. The goal is always to enhance these animations in a way that gives artists more control over what happens. So, what about more elaborate models? Do they work too? Let's have a look and find out together. This is the traditional animation method. No deformation on the belly. Nostrils are not moving too much. So now, let's see the new method. Look at that. The face, ears, nose, and mouth now show elastic movement. So cool. Even the belly is deforming as the model is running about. So, we already see that it can kind of deal with the forces that are present in the simulation. Let's give this aspect a closer look. This is the traditional method: we are moving up and down. Up and down. All right, but look at the vectors here. There is an external force field, or in simpler words, the wind is blowing. And unfortunately, not much is really happening to this model. But when we plug in this new technique, look, it finally responds to these external forces. And yes, as a reward, we get more wiggles. So, what else can this do? A lot more. For instance, it does not require this particular kind of rig with the bones and joints. This hedgehog was instead rigged with two handles, which is much simpler. But unfortunately, when we start to move it with traditional techniques, well, the leg is kind of pinned to the ground while the remainder of the model moves. So, how did the new technique deal with this? Oh yes, the animation is much more realistic, and things are dangling around in a much more lively manner. The fact that it works on many different kinds of rigged models out there in the world boosts the usability of this technique a great deal. But we are not done yet. No, no, if you have been holding on to your paper so far, now squeeze that paper, because we can take a scene, drop in a bunch of objects, and expect a realistic output. But wait a second. If you have been watching the series for a while, you know for a fact that for this, we need to run an elaborate physics simulator.
For instance, just look at this muscle simulation from earlier. And here's the key: this animation took over an hour for every second of video footage that you see here. The new method does not need to compute a full-blown physics simulation to add this kind of elastic behavior, and hence, in many cases, it runs in real time. And it works for all kinds of rigs out there in the world, and we even have artistic control over the output. We don't need elaborate models and many hours of rigging and simulation to be able to create a beautiful animation anymore. What a time to be alive. This episode has been supported by Weights & Biases. In this post, they show you how to debug and compare models by tracking predictions, hyperparameters, GPU usage, and more. During my PhD studies, I trained a ton of neural networks which were used in our experiments. However, over time, there was just too much data in our repositories, and what I am looking for is not data but insight. And that's exactly how Weights & Biases helps you, by organizing your experiments. It is used by more than 200 companies and research institutions, including OpenAI, Toyota Research, GitHub, and more. And get this, Weights & Biases is free for all individuals, academics, and open source projects. Make sure to visit them through wandb.com/papers or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
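For a feel of what secondary motion on top of a rig means, here is a minimal spring-damper jiggle sketch in the spirit of common real-time tricks; it is not the paper's complementary dynamics solve. The stiffness, damping, and sample trajectory are made-up numbers.

```python
import numpy as np

def jiggle_track(target_positions, dt=1/60, stiffness=120.0, damping=8.0):
    """target_positions: (N, 3) rig positions per frame. Returns a jiggled (N, 3) track."""
    pos = target_positions[0].astype(float)
    vel = np.zeros(3)
    out = []
    for target in target_positions:
        # Semi-implicit Euler on a damped spring pulling the free point toward the rig.
        accel = stiffness * (target - pos) - damping * vel
        vel += accel * dt
        pos += vel * dt
        out.append(pos.copy())
    return np.array(out)

# Example: the rig snaps to the right at frame 30; the jiggled track overshoots
# and oscillates around the new pose instead of following it rigidly.
rig = np.zeros((120, 3))
rig[30:, 0] = 1.0
print(jiggle_track(rig)[28:40, 0].round(3))
```

The paper's contribution is, roughly speaking, doing this kind of thing in a principled way for full meshes while respecting the artist's rig, rather than for a single lagging point as in this sketch.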
[{"start": 0.0, "end": 4.48, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Dr. Karajon Aifahir."}, {"start": 4.48, "end": 9.92, "text": " I hope you like Wiggles and Jiggles, because today we are going to see a lot of them."}, {"start": 9.92, "end": 16.96, "text": " You see, this technique promises to imbue a rigged animation with elastoplastic secondary effects."}, {"start": 16.96, "end": 24.32, "text": " Now, if you tell this to a computer graphics researcher, they will be extremely happy to hear that this is finally possible,"}, {"start": 24.32, "end": 26.400000000000002, "text": " but what does this really mean?"}, {"start": 26.4, "end": 31.68, "text": " This beautiful fish animation is one of the finest demonstrations of the new method."}, {"start": 31.68, "end": 33.76, "text": " So, what happened here?"}, {"start": 33.76, "end": 40.16, "text": " Well, the first part means that we modeled a piece of 3D geometry and we wish to make it move,"}, {"start": 40.16, "end": 48.56, "text": " but in order to make this movement believable, we have to specify where the bones and joints are located within the model."}, {"start": 48.56, "end": 53.2, "text": " This process is called rigging and this model will be the input for it."}, {"start": 53.2, "end": 57.6, "text": " We can make it move with traditional methods, well, kind of."}, {"start": 57.6, "end": 63.2, "text": " You see that the bones and joints are working, but the model is still solid."}, {"start": 63.2, "end": 68.0, "text": " For instance, look, the trunk and ears are both completely solid."}, {"start": 68.0, "end": 71.60000000000001, "text": " This is not what we would expect to see from this kind of motion."}, {"start": 71.60000000000001, "end": 74.56, "text": " So, can we do better than this?"}, {"start": 74.56, "end": 81.2, "text": " Well, hold on to your papers and let's see how this new technique enhances these animations."}, {"start": 81.2, "end": 83.60000000000001, "text": " Oh yes, floppy ears."}, {"start": 83.60000000000001, "end": 86.4, "text": " Also, the trunk is dangling everywhere."}, {"start": 86.4, "end": 88.48, "text": " What a lovely animation."}, {"start": 88.48, "end": 91.44, "text": " Prepare for a lot more rigels and jiggles."}, {"start": 91.44, "end": 95.84, "text": " Now, I love how we can set up the material properties for this method."}, {"start": 95.84, "end": 101.60000000000001, "text": " Of course, it cannot just make decisions by itself because it would compete with the artist's vision."}, {"start": 101.60000000000001, "end": 108.64, "text": " The goal is always to enhance these animations in a way that gives artists more control over what happens."}, {"start": 108.64, "end": 111.44, "text": " So, what about more elaborate models?"}, {"start": 111.44, "end": 112.88, "text": " Do they work too?"}, {"start": 112.88, "end": 115.44, "text": " Let's have a look and find out together."}, {"start": 115.44, "end": 118.16, "text": " This is the traditional animation method."}, {"start": 118.16, "end": 120.16, "text": " No deformation on the belly."}, {"start": 120.16, "end": 122.48, "text": " Nostrils are not moving too much."}, {"start": 122.48, "end": 125.12, "text": " So now, let's see the new method."}, {"start": 125.12, "end": 126.72, "text": " Look at that."}, {"start": 126.72, "end": 133.36, "text": " The face, ears, nose and mouth now show elastic movement."}, {"start": 133.36, "end": 134.56, "text": " So cool."}, {"start": 134.56, "end": 138.4, "text": " Even the belly is deforming as the 
model is running about."}, {"start": 138.4, "end": 144.24, "text": " So, we already see that it can kind of deal with the forces that are present in the simulation."}, {"start": 144.24, "end": 146.96, "text": " Let's give this aspect a closer look."}, {"start": 146.96, "end": 150.72, "text": " This is the traditional method we are moving up and down."}, {"start": 150.72, "end": 152.24, "text": " Up and down."}, {"start": 152.24, "end": 154.64000000000001, "text": " All right, but look at the vectors here."}, {"start": 154.64000000000001, "end": 159.76, "text": " There is an external force field or in simpler words, the wind is blowing."}, {"start": 159.76, "end": 163.92000000000002, "text": " And unfortunately, not much is really happening to this model."}, {"start": 163.92000000000002, "end": 166.56, "text": " But when we plug in this new technique,"}, {"start": 166.56, "end": 170.4, "text": " look, it finally responds to these external forces."}, {"start": 170.4, "end": 173.28, "text": " And yes, as a reward, we get more vehicles."}, {"start": 175.04, "end": 177.2, "text": " So, what else can this do?"}, {"start": 177.2, "end": 178.16, "text": " A lot more."}, {"start": 178.16, "end": 183.12, "text": " For instance, it does not require this particular kind of rig with the bones and joints."}, {"start": 184.16, "end": 189.12, "text": " This Hedgehog was instead rigged with two handles, which is much simpler."}, {"start": 189.12, "end": 192.8, "text": " But unfortunately, when we start to move it with traditional techniques,"}, {"start": 192.8, "end": 198.8, "text": " well, the leg is kind of pinned to the ground where the remainder of the model moves."}, {"start": 199.44, "end": 202.0, "text": " So, how did the new technique deal with this?"}, {"start": 202.88000000000002, "end": 209.04000000000002, "text": " Oh, yes, the animation is much more realistic and things are dangling around in a much more"}, {"start": 209.04000000000002, "end": 214.08, "text": " lively manner. The fact that it works on many different kinds of rigged models out there in the"}, {"start": 214.08, "end": 217.76000000000002, "text": " world boosters the usability of this technique a great deal."}, {"start": 218.4, "end": 220.32000000000002, "text": " But we are not done yet."}, {"start": 220.32, "end": 226.32, "text": " No, no, if you have been holding onto your paper so far, now squeeze that paper because we can"}, {"start": 226.32, "end": 231.6, "text": " take a scene, drop in a bunch of objects and expect a realistic output."}, {"start": 232.32, "end": 236.4, "text": " But wait a second. If you have been watching the series for a while,"}, {"start": 236.4, "end": 241.28, "text": " you know for a fact that for this, we need to run an elaborate physics simulator."}, {"start": 241.28, "end": 244.79999999999998, "text": " For instance, just look at this muscle simulation from earlier."}, {"start": 244.79999999999998, "end": 250.24, "text": " And here's the key. This animation took over an hour for every second of video footage."}, {"start": 250.24, "end": 255.68, "text": " That you see here. The new method does not need to compute a full-blown physics simulation to add"}, {"start": 255.68, "end": 261.36, "text": " this kind of elastic behavior and hence in many cases it runs in real time."}, {"start": 261.36, "end": 267.2, "text": " And it works for all kinds of rigs out there in the world and we even have artistic control over"}, {"start": 267.2, "end": 272.96000000000004, "text": " the output. 
We don't need elaborate models and many hours of rigging and simulation to be able"}, {"start": 272.96000000000004, "end": 277.04, "text": " to create a beautiful animation anymore. What a time to be alive."}, {"start": 277.04, "end": 283.12, "text": " This episode has been supported by weights and biases. In this post they show you how to debug"}, {"start": 283.12, "end": 288.96000000000004, "text": " and compare models by tracking predictions, hyper parameters, GPU usage and more."}, {"start": 288.96000000000004, "end": 294.72, "text": " During my PhD studies I trained a ton of neural networks which were used in our experiments."}, {"start": 294.72, "end": 300.72, "text": " However, over time there was just too much data in our repositories and what I am looking for is"}, {"start": 300.72, "end": 307.28000000000003, "text": " not data but insight. And that's exactly how weights and biases helps you by organizing your"}, {"start": 307.28000000000003, "end": 313.52000000000004, "text": " experiments. It is used by more than 200 companies and research institutions including OpenAI,"}, {"start": 313.52000000000004, "end": 320.56, "text": " Toyota Research, GitHub and more. And get this, weight and biases is free for all individuals,"}, {"start": 320.56, "end": 328.16, "text": " academics and open source projects. Make sure to visit them through wnb.com slash papers or just"}, {"start": 328.16, "end": 333.36, "text": " click the link in the video description and you can get a free demo today. Our thanks to weights"}, {"start": 333.36, "end": 338.56, "text": " and biases for their long standing support and for helping us make better videos for you."}, {"start": 338.56, "end": 366.96, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=K940MNp7V8M
Simulating Honey And Hot Showers For Bunnies! 🍯🐰
❤️ Check out Weights & Biases and sign up for a free demo here: https://www.wandb.com/papers ❤️ Their mentioned post is available here: https://wandb.ai/wandb/getting-started/reports/Visualize-Debug-Machine-Learning-Models--VmlldzoyNzY5MDk?utm_source=karoly#System-4 📝 The paper "An Adaptive Variational Finite Difference Framework for Efficient Symmetric Octree Viscosity" is available here: https://cs.uwaterloo.ca/~rgoldade/adaptiveviscosity/ Houdini video: https://www.sidefx.com/products/whats-new/18_5_vfx/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-1958464/ Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #honeysim
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Feast your eyes upon this simulation from our previous episode that showcases a high-viscosity material, that is, honey. The fact that honey is viscous means that it is a material that is highly resistant against deformation. In simpler words, if we can simulate viscosity well, we can engage in the favorite pastimes of the computer graphics researcher, or in other words, take some of these letters, throw them around, watch them slowly lose their previous shapes, and then, of course, destroy them in a spectacular manner. I love making simulations like this, and when I do, I want to see a lot of detail, which unfortunately means that I also have to run these simulations for a long time. So, when I saw that this new technique promises to compute these about four times faster, it really grabbed my attention. Here is a visual demonstration of such a speed difference. Hmm, so, four times, you say. That sounds fantastic. But what's the catch here? This kind of speedup usually comes with cutting corners. So, let's test the might of this new method through three examples of increasing difficulty. Experiment number one. Here is the regular simulation, and here is the new technique. So, let's see the quality differences. One more time. Well, I don't see any. Do you? Four times faster with no degradation in quality. Hmm, so far, so good. So now, let's give it a harder example. Experiment number two: varying viscosities and temperatures. In other words, let's give this bunny a hot shower. That was beautiful, and our question is, again, how close is this to the slow reference simulation? Wow, this is really close. I have to look carefully to even have a fighting chance at finding a difference. Checkmark. And I also wonder, can it deal with extremely detailed simulations? And now, hold on to your papers for experiment number three, dropping a viscous bunny on thin wires, and just look at the remnants of the poor bunny stuck in there. Loving it. Now, for almost every episode of this series, I get comments saying, Károly, this is all great, but when do I get to use this? When does this make it to the real world? These questions are completely justified, and the answer is, right about now. You can use this right now. This paper was published in 2019, and now, it appears to be already part of Houdini, one of the industry-standard programs for visual effects and physics simulations. Tech transfer in just one year. Wow! Huge congratulations to Ryan Goldade and his colleagues for this incredible paper, and huge respect to the folks at Houdini, who keep outdoing themselves with these amazing updates. This episode has been supported by Weights & Biases. In this post, they show you how to monitor and optimize your GPU consumption during model training in real time with one line of code. Weights & Biases provides tools to track your experiments in your deep learning projects. Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs, such as OpenAI, Toyota Research, GitHub, and more. And the best part is that Weights & Biases is free for all individuals, academics, and open source projects. It really is as good as it gets. Make sure to visit them through wandb.com/papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for their longstanding support, and for helping us make better videos for you.
Thanks for watching, and for your generous support, and I'll see you next time.
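To see in the simplest possible terms why viscosity resists deformation, here is a textbook-style sketch of the viscous term acting as diffusion of velocity on a tiny 1D grid. It has nothing to do with the paper's adaptive octree solver beyond illustrating what quantity such solvers work on; the grid size, viscosity, and time step are arbitrary choices.

```python
import numpy as np

def viscous_step(u, nu=0.5, dx=1.0, dt=0.2):
    """One explicit diffusion step; a higher nu smears out velocity differences faster."""
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2   # periodic 1D Laplacian
    return u + dt * nu * lap

u = np.zeros(32)
u[12:20] = 1.0              # a moving blob of fluid in otherwise still surroundings
for _ in range(50):
    u = viscous_step(u)
print(u.round(2))           # the sharp velocity jump has been smoothed out
```

The explicit step here is only stable for small time steps, which hints at why production solvers like the one in the paper treat viscosity implicitly and put so much effort into making that solve fast.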
[{"start": 0.0, "end": 4.5, "text": " Dear Fellow Scholars, this is Two Minute Papers with Dr. Karo Zsolnai-Fehir."}, {"start": 4.5, "end": 12.8, "text": " Feast your eyes upon this simulation from our previous episode that showcases a high viscosity material that is honey."}, {"start": 12.8, "end": 19.3, "text": " The fact that honey is viscous means that it is a material that is highly resistant against deformation."}, {"start": 19.3, "end": 27.1, "text": " In simpler words, if we can simulate viscosity well, we can engage in the favorite pastimes of the computer graphics researcher,"}, {"start": 27.1, "end": 34.6, "text": " or in other words, take some of these letters, throw them around, watch them slowly lose their previous shapes,"}, {"start": 34.6, "end": 39.0, "text": " and then, of course, destroy them in a spectacular manner."}, {"start": 39.0, "end": 50.3, "text": " I love making simulations like this, and when I do, I want to see a lot of detail, which unfortunately means that I also have to run these simulations for a long time."}, {"start": 50.3, "end": 58.3, "text": " So, when I saw that this new technique promises to compute these about four times faster, it really grabbed my attention."}, {"start": 58.3, "end": 62.3, "text": " Here is a visual demonstration of such a speed difference."}, {"start": 62.3, "end": 67.8, "text": " Hmm, so, four times, you say, that sounds fantastic."}, {"start": 67.8, "end": 73.3, "text": " But, what's the catch here? This kind of speed up usually comes with cutting corners."}, {"start": 73.3, "end": 79.3, "text": " So, let's test the might of this new method through three examples of increasing difficulty."}, {"start": 79.3, "end": 85.8, "text": " Experiment number one. Here is the regular simulation, and here is the new technique."}, {"start": 85.8, "end": 89.3, "text": " So, let's see the quality differences."}, {"start": 89.3, "end": 94.3, "text": " One more time. Well, I don't see any. Do you?"}, {"start": 94.3, "end": 104.3, "text": " Four times faster with no degradation in quality. Hmm, so far, so good. So now, let's give it a harder example."}, {"start": 104.3, "end": 108.8, "text": " Experiment number two. Varying viscousities and temperatures."}, {"start": 108.8, "end": 112.8, "text": " In other words, let's give this bunny a hot shower."}, {"start": 112.8, "end": 120.3, "text": " That was beautiful, and our question is, again, how close is this to the slow reference simulation?"}, {"start": 120.3, "end": 128.3, "text": " Wow, this is really close. I have to look carefully to even have a fighting chance in finding a difference."}, {"start": 128.3, "end": 134.3, "text": " Checkmark. And I also wonder, can it deal with extremely detailed simulations?"}, {"start": 134.3, "end": 141.8, "text": " And now, hold on to your papers for experiment number three, dropping a viscous bunny on thin wires,"}, {"start": 141.8, "end": 146.3, "text": " and just look at the remnants of the poor bunny stuck in there."}, {"start": 146.3, "end": 151.8, "text": " Loving it. 
Now, for almost every episode of this paper, I get comments saying,"}, {"start": 151.8, "end": 156.3, "text": " Karoi, this is all great, but when do I get to use this?"}, {"start": 156.3, "end": 158.8, "text": " When does this make it to the real world?"}, {"start": 158.8, "end": 164.3, "text": " These questions are completely justified, and the answer is, right about now."}, {"start": 164.3, "end": 166.8, "text": " You can use this right now."}, {"start": 166.8, "end": 173.3, "text": " This paper was published in 2019, and now, it appears to be already part of Houdini,"}, {"start": 173.3, "end": 177.8, "text": " one of the industry standard programs for visual effects and physics simulations."}, {"start": 177.8, "end": 182.3, "text": " Tech transfer in just one year. Wow!"}, {"start": 182.3, "end": 190.3, "text": " Huge congratulations to Ryan Godade and his colleagues for this incredible paper, and huge respect to the folks at Houdini,"}, {"start": 190.3, "end": 194.3, "text": " who keep outdoing themselves with these amazing updates."}, {"start": 194.3, "end": 197.3, "text": " This episode has been supported by weights and biases."}, {"start": 197.3, "end": 202.3, "text": " In this post, they show you how to monitor and optimize your GPU consumption"}, {"start": 202.3, "end": 206.3, "text": " during model training in real time with one line of code."}, {"start": 206.3, "end": 211.3, "text": " Weight and biases provides tools to track your experiments in your deep learning projects."}, {"start": 211.3, "end": 218.3, "text": " Their system is designed to save you a ton of time and money, and it is actively used in projects at prestigious labs,"}, {"start": 218.3, "end": 222.3, "text": " such as OpenAI, Toyota Research, GitHub, and more."}, {"start": 222.3, "end": 228.8, "text": " And the best part is that weights and biases is free for all individuals, academics, and open source projects."}, {"start": 228.8, "end": 231.3, "text": " It really is as good as it gets."}, {"start": 231.3, "end": 238.3, "text": " Make sure to visit them through wnb.com slash papers, or just click the link in the video description,"}, {"start": 238.3, "end": 240.3, "text": " and you can get a free demo today."}, {"start": 240.3, "end": 246.3, "text": " Our thanks to weights and biases for their longstanding support, and for helping us make better videos for you."}, {"start": 246.3, "end": 273.3, "text": " Thanks for watching, and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=fPrxiRceAac
These Are Pixels Made of Wood! 🌲🧩
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Computational Parquetry: Fabricated Style Transfer with Wood Pixels" is available here: https://light.informatik.uni-bonn.de/computational-parquetry-fabricated-style-transfer-with-wood-pixels/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Aleksandr Mashrabov, Alex Haro, Alex Serban, Alex Paden, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bruno Mikuš, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Lau, Eric Martel, Gordon Child, Haris Husic, Jace O'Brien, Javier Bustamante, Joshua Goller, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Michael Albrecht, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Ramsey Elbasheer, Robin Graham, Steef, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Tybie Fitzhugh. If you wish to support the series, click here: https://www.patreon.com/TwoMinutePapers Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér. Everybody loves style transfer. This is a task typically done with neural networks where we have two images, one for content and one for style, and the output is the content image reimagined with this new style. The cool thing is that the style can be a different photo, a famous painting, or even wooden patterns. Feast your eyes on these majestic images of this cat reimagined with wooden parquetry with these previous methods. And now look at the result of this new technique that looks way nicer. Everything is in order here except one thing. And now hold on to your papers because this is not style transfer. Not at all. This is not a synthetic photo made by a neural network. This is a reproduction of this cat image by cutting wood slabs into tiny pieces and putting them together carefully. This is computational parquetry. And here the key requirement is that if we look from afar, it looks like the target image, but if we zoom in, it gets abundantly clear that the puzzle pieces here are indeed made of real wood. And that is an excellent intuition for this work. It is kind of like image stylization, but done in the real world. Now that is extremely challenging. Why is that? Well, first there are lots of different kinds of wood types. Second, if this piece was not a physical object but an image, this job would not be that hard because we could add to it, clone it, and do all kinds of pixel magic to it. However, these are real physical pieces of wood, so we can do exactly none of that. The only thing we can do is take away from it and we have limitations even on that because we have to design it in a way that a CNC device should be able to cut these pieces. And third, you will see that initially nothing seems to work well. However, this technique does this with flying colors, so I wonder how does this really work? First, we can take a photo of the wood panels that we have at our disposal, decide how and where to cut, give these instructions to the CNC machine to perform the cutting, and now we have to assemble them in a way that it resembles the target image. Well, still, that's easier said than done. For instance, imagine that we have this target image and we have these wood panels. This doesn't look anything like that, so how could we possibly approximate it? If we try to match the colors of the two, we get something that is too much in the middle, and these colors don't resemble any of the original inputs. Not good. Instead, the authors opted to transform both of them to grayscale and match not the colors, but the intensities of the colors instead. This seems a little more usable, until we realize that we still don't know what pieces to use and where. Look, here on the left you see how the image is being reproduced with the wood pieces, but we have to mind the fact that as soon as we cut out one piece of wood, it is not available anymore, so it has to be subtracted from our wood panel repository here. As our resources are constrained, depending on what order we put the pieces together, we may get a completely different result. But, look, there is still a problem. The left part of the suit gets a lot of detail, while the right part, not so much. I cannot judge which solution is better, less or more detail, but it needs to be a little more consistent over the image. Now you see that whatever we do, nothing seems to work well in the general case. 
Now, we could get a much better solution if we would run the algorithm with every possible starting point in the image and with every possible ordering of the wood pieces, but that would take longer than our lifetime to finish. So, what do we do? Well, the authors have two really cool heuristics to address this problem. First, we can start from the middle that usually gives us a reasonably good solution since the object of interest is often in the middle of the image and the good pieces are still available for it. Or, even better, if that does not work too well, we can look for salient regions. These are the places where there is a lot going on and try to fill them in first. As you see, both of these tricks seem to work quite well most of the time. Finally, something that works. And if you have been holding onto your paper so far, now squeeze that paper because this technique not only works, but provides us a great deal of artistic control over the results. Look at that. And that's not all. We can even control the resolution of the output, or we can create a hand-drawn geometry ourselves. I love how the authors took a really challenging problem when nothing really worked well. And still, they didn't stop until they absolutely nailed the solution. Congratulations! This episode has been supported by Lambda GPU Cloud. If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud. They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances. And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure. Plus, they are the only Cloud service with 48GB RTX 8000. Join researchers at organizations like Apple, MIT, and Caltech in using Lambda Cloud instances, workstations, or servers. Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today. Our thanks to Lambda for their long-standing support and for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
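To make the assembly procedure described in this transcript more concrete, here is a minimal Python sketch of a greedy, resource-constrained matching loop: the target and the wood panel are compared in grayscale, every cut piece can be used only once, and the fill order is chosen by a simple saliency proxy (local variance). This is an illustrative approximation, not the authors' implementation; the square patches, the variance-based saliency, and the random stand-in images are assumptions made purely for the sketch, while the real pipeline photographs actual panels and drives a CNC cutter with arbitrary piece shapes.

```python
# Minimal sketch of the greedy, resource-constrained assembly idea (not the authors' code).
# Assumptions: grayscale numpy arrays for the target and the wood panel, square patches,
# and local variance as a stand-in for saliency.
import numpy as np

def cut_patches(panel, size):
    """Cut a wood-panel image into a list of square patches (each usable only once)."""
    patches = []
    h, w = panel.shape
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            patches.append(panel[y:y + size, x:x + size].copy())
    return patches

def greedy_parquetry(target, panel, size=8):
    """Reproduce `target` from wood patches, filling the most salient cells first."""
    h, w = target.shape
    h, w = h - h % size, w - w % size            # crop to a multiple of the patch size
    target = target[:h, :w].astype(np.float32)
    inventory = cut_patches(panel.astype(np.float32), size)

    # Order target cells by saliency (here: local variance), most salient first.
    cells = [(y, x) for y in range(0, h, size) for x in range(0, w, size)]
    cells.sort(key=lambda c: -target[c[0]:c[0] + size, c[1]:c[1] + size].var())

    result = np.zeros_like(target)
    for y, x in cells:
        cell = target[y:y + size, x:x + size]
        # Match intensities, not colors: pick the unused patch with the lowest
        # mean squared difference to this cell.
        errors = [np.mean((p - cell) ** 2) for p in inventory]
        best = int(np.argmin(errors))
        result[y:y + size, x:x + size] = inventory.pop(best)  # each piece is used once
    return result

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    target = rng.uniform(0, 255, (64, 64))       # stand-in for the grayscale target image
    panel = rng.uniform(0, 255, (128, 128))      # stand-in for the photographed wood panel
    print(greedy_parquetry(target, panel).shape)
```

Because the inventory shrinks as pieces are consumed, changing the fill order in this sketch changes the final mosaic, which mirrors the ordering problem and the center-first or saliency-first heuristics the episode describes.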
[{"start": 0.0, "end": 4.72, "text": " Dear Fellow Scholars, this is two-minute papers with Dr. Karajona Ifahid."}, {"start": 4.72, "end": 7.2, "text": " Everybody loves style transfer."}, {"start": 7.2, "end": 11.68, "text": " This is a task typically done with neural networks where we have two images,"}, {"start": 11.68, "end": 14.64, "text": " one for content and one for style,"}, {"start": 14.64, "end": 19.28, "text": " and the output is the content image reimagined with this new style."}, {"start": 19.28, "end": 22.88, "text": " The cool thing is that the style can be a different photo,"}, {"start": 22.88, "end": 26.8, "text": " a famous painting, or even wooden patterns."}, {"start": 26.8, "end": 32.56, "text": " Feast your eyes on these majestic images of this cat reimagined with wooden parketry"}, {"start": 32.56, "end": 34.4, "text": " with these previous methods."}, {"start": 34.4, "end": 38.72, "text": " And now look at the result of this new technique that looks way nicer."}, {"start": 39.44, "end": 42.56, "text": " Everything is in order here except one thing."}, {"start": 43.120000000000005, "end": 47.84, "text": " And now hold on to your papers because this is not style transfer."}, {"start": 47.84, "end": 48.72, "text": " Not at all."}, {"start": 48.72, "end": 52.24, "text": " This is not a synthetic photo made by a neural network."}, {"start": 52.24, "end": 56.400000000000006, "text": " This is a reproduction of this cat image by cutting wood slabs"}, {"start": 56.4, "end": 59.92, "text": " into tiny pieces and putting them together carefully."}, {"start": 60.56, "end": 62.72, "text": " This is computational parketry."}, {"start": 63.36, "end": 67.12, "text": " And here the key requirement is that if we look from afar,"}, {"start": 67.12, "end": 70.4, "text": " it looks like the target image, but if we zoom in,"}, {"start": 70.4, "end": 76.16, "text": " it gets abundantly clear that the puzzle pieces here are indeed made of real wood."}, {"start": 76.16, "end": 79.2, "text": " And that is an excellent intuition for this work."}, {"start": 79.2, "end": 84.08, "text": " It is kind of like image stylization, but done in the real world."}, {"start": 84.08, "end": 86.88, "text": " Now that is extremely challenging."}, {"start": 86.88, "end": 88.4, "text": " Why is that?"}, {"start": 88.4, "end": 92.0, "text": " Well, first there are lots of different kinds of wood types."}, {"start": 92.0, "end": 96.24, "text": " Second, if this piece was not a physical object but an image,"}, {"start": 96.24, "end": 100.96, "text": " this job would not be that hard because we could add to it, clone it,"}, {"start": 100.96, "end": 104.0, "text": " and do all kinds of pixel magic to it."}, {"start": 104.0, "end": 107.6, "text": " However, these are real physical pieces of wood,"}, {"start": 107.6, "end": 110.32, "text": " so we can do exactly none of that."}, {"start": 110.32, "end": 116.0, "text": " The only thing we can do is take away from it and we have limitations even on that"}, {"start": 116.0, "end": 121.44, "text": " because we have to design it in a way that a CNC device should be able to cut these pieces."}, {"start": 121.44, "end": 126.0, "text": " And third, you will see that initially nothing seems to work well."}, {"start": 126.0, "end": 129.68, "text": " However, this technique does this with flying colors,"}, {"start": 129.68, "end": 132.16, "text": " so I wonder how does this really work?"}, {"start": 132.16, "end": 137.04, "text": " First, we can take a photo of the wood 
panels that we have at our disposal,"}, {"start": 137.04, "end": 144.16, "text": " decide how and where to cut, give these instructions to the CNC machine to perform the cutting,"}, {"start": 144.16, "end": 148.88, "text": " and now we have to assemble them in a way that it resembles the target image."}, {"start": 148.88, "end": 152.64, "text": " Well, still, that's easier said than done."}, {"start": 152.64, "end": 158.0, "text": " For instance, imagine that we have this target image and we have these wood panels."}, {"start": 158.0, "end": 160.88, "text": " This doesn't look anything like that,"}, {"start": 160.88, "end": 163.76, "text": " so how could we possibly approximate it?"}, {"start": 163.76, "end": 169.84, "text": " If we try to match the colors of the two, we get something that is too much in the middle,"}, {"start": 169.84, "end": 173.2, "text": " and these colors don't resemble any of the original inputs."}, {"start": 174.07999999999998, "end": 174.64, "text": " Not good."}, {"start": 175.2, "end": 181.84, "text": " Instead, the authors opted to transform both of them to grayscale and match not the colors,"}, {"start": 181.84, "end": 184.95999999999998, "text": " but the intensities of the colors instead."}, {"start": 184.95999999999998, "end": 187.04, "text": " This seems a little more usable,"}, {"start": 187.04, "end": 192.0, "text": " until we realize that we still don't know what pieces to use and where."}, {"start": 192.0, "end": 198.08, "text": " Look, here on the left you see how the image is being reproduced with the wood pieces,"}, {"start": 198.08, "end": 203.44, "text": " but we have to mind the fact that as soon as we cut out one piece of wood,"}, {"start": 203.44, "end": 205.36, "text": " it is not available anymore,"}, {"start": 205.36, "end": 210.24, "text": " so it has to be subtracted from our wood panel repository here."}, {"start": 210.24, "end": 215.76, "text": " As our resources are constrained, depending on what order we put the pieces together,"}, {"start": 215.76, "end": 218.56, "text": " we may get a completely different result."}, {"start": 218.56, "end": 222.56, "text": " But, look, there is still a problem."}, {"start": 222.56, "end": 225.92000000000002, "text": " The left part of the suit gets a lot of detail,"}, {"start": 225.92000000000002, "end": 228.4, "text": " while the right part, not so much."}, {"start": 229.04, "end": 233.28, "text": " I cannot judge which solution is better, less or more detail,"}, {"start": 233.28, "end": 236.4, "text": " but it needs to be a little more consistent over the image."}, {"start": 237.04, "end": 241.76, "text": " Now you see that whatever we do, nothing seems to work well in the general case."}, {"start": 242.4, "end": 247.6, "text": " Now, we could get a much better solution if we would run the algorithm with every possible"}, {"start": 247.6, "end": 252.23999999999998, "text": " starting point in the image and with every possible ordering of the wood pieces,"}, {"start": 252.23999999999998, "end": 255.84, "text": " but that would take longer than our lifetime to finish."}, {"start": 255.84, "end": 257.2, "text": " So, what do we do?"}, {"start": 257.84, "end": 262.15999999999997, "text": " Well, the authors have two really cool heuristics to address this problem."}, {"start": 262.8, "end": 267.6, "text": " First, we can start from the middle that usually gives us a reasonably good solution"}, {"start": 267.6, "end": 272.96, "text": " since the object of interest is often in the middle of the image and 
the good pieces are still"}, {"start": 272.96, "end": 280.23999999999995, "text": " available for it. Or, even better, if that does not work too well, we can look for salient regions."}, {"start": 280.23999999999995, "end": 284.71999999999997, "text": " These are the places where there is a lot going on and try to fill them in first."}, {"start": 285.44, "end": 290.0, "text": " As you see, both of these tricks seem to work quite well most of the time."}, {"start": 290.88, "end": 293.2, "text": " Finally, something that works."}, {"start": 293.84, "end": 297.03999999999996, "text": " And if you have been holding onto your paper so far,"}, {"start": 297.03999999999996, "end": 301.28, "text": " now squeeze that paper because this technique not only works,"}, {"start": 301.28, "end": 305.44, "text": " but provides us a great deal of artistic control over the results."}, {"start": 306.23999999999995, "end": 312.32, "text": " Look at that. And that's not all. We can even control the resolution of the output,"}, {"start": 312.32, "end": 315.76, "text": " or we can create a hand-drawn geometry ourselves."}, {"start": 316.32, "end": 322.0, "text": " I love how the authors took a really challenging problem when nothing really worked well."}, {"start": 322.0, "end": 326.88, "text": " And still, they didn't stop until they absolutely nailed the solution."}, {"start": 326.88, "end": 328.32, "text": " Congratulations!"}, {"start": 328.32, "end": 331.84, "text": " This episode has been supported by Lambda GPU Cloud."}, {"start": 331.84, "end": 337.76, "text": " If you're looking for inexpensive Cloud GPUs for AI, check out Lambda GPU Cloud."}, {"start": 337.76, "end": 344.64, "text": " They've recently launched Quadro RTX 6000, RTX 8000, and V100 instances."}, {"start": 344.64, "end": 352.24, "text": " And hold onto your papers because Lambda GPU Cloud can cost less than half of AWS and Azure."}, {"start": 352.24, "end": 357.52, "text": " Plus, they are the only Cloud service with 48GB RTX 8000."}, {"start": 357.52, "end": 362.08, "text": " Join researchers at organizations like Apple, MIT, and Caltech"}, {"start": 362.08, "end": 365.84, "text": " in using Lambda Cloud instances, workstations, or servers."}, {"start": 365.84, "end": 372.56, "text": " Make sure to go to LambdaLabs.com slash papers to sign up for one of their amazing GPU instances today."}, {"start": 372.56, "end": 375.35999999999996, "text": " Our thanks to Lambda for their long-standing support"}, {"start": 375.35999999999996, "end": 378.24, "text": " and for helping us make better videos for you."}, {"start": 378.24, "end": 390.48, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]