CHANNEL_NAME: stringclasses, 1 value
URL: stringlengths, 43 to 43
TITLE: stringlengths, 19 to 90
DESCRIPTION: stringlengths, 475 to 4.65k
TRANSCRIPTION: stringlengths, 0 to 20.1k
SEGMENTS: stringlengths, 2 to 30.8k
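Each row below pairs a TRANSCRIPTION string with a SEGMENTS string, where SEGMENTS is a JSON-encoded list of objects with start, end, and text fields. Below is a minimal sketch of how one such row could be parsed; the inline example row is a shortened, hypothetical stand-in for however the dataset is actually loaded, not an official loader.

```python
import json

# Hypothetical stand-in for one row of the dump shown below; in practice the
# row would come from whatever format the dataset ships in (CSV, parquet, etc.).
row = {
    "CHANNEL_NAME": "Two Minute Papers",
    "SEGMENTS": '[{"start": 0.0, "end": 4.6, "text": " Dear Fellow Scholars..."},'
                ' {"start": 4.6, "end": 9.92, "text": " Today, we are going to..."}]',
}

# SEGMENTS is stored as a JSON string, so decode it into a list of dicts.
segments = json.loads(row["SEGMENTS"])

for seg in segments:
    duration = seg["end"] - seg["start"]
    print(f'{seg["start"]:7.2f}s  ({duration:5.2f}s)  {seg["text"].strip()}')
```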
Two Minute Papers
https://www.youtube.com/watch?v=9b2dxc1QalM
OpenAI’s Robot Hand Won't Stop Rotating The Rubik’s Cube 👋
❤️ Check out Weights & Biases here and sign up for a free demo: https://www.wandb.com/papers The mentioned #OpenAI blog post on the gradients and its notebook are available here: Post: https://www.wandb.com/articles/exploring-gradients Notebook: https://colab.research.google.com/drive/1bsoWY8g0DkxAzVEXRigrdqRZlq44QwmQ 📝 The paper "Solving Rubik’s Cube with a Robot Hand" is available here: https://openai.com/blog/solving-rubiks-cube/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Anastasia Marchenkova, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dan Kennedy, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Matthias Jost, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is 2 Minute Papers with Károly Zsolnai-Fehér. Today, we are going to talk about OpenAI's robot hand that dexterously manipulates and solves Rubik's cube. Here you can marvel at this majestic result. Now, why did I use the term dexterously manipulates Rubik's cube? In this project, there are two problems to solve. One, finding out what kind of rotation we need to get closer to a solved cube, and two, listing the finger positions to be able to execute these prescribed rotations. And this paper is about the latter, which means that the rotation sequences are given by a previously existing algorithm, and OpenAI's method manipulates the hand to be able to follow this algorithm. To rephrase it, the robot hand doesn't really know how to solve the cube and is told what to do, and the contribution lies in the robot figuring out how to execute these rotations. If you take only one thing from this video, let it be this thought. Now, to perform all this, we have to first solve a problem in a computer simulation, where we can learn and iterate quickly, and then transfer everything the agent learned there to the real world, and hope that it obtained general knowledge that indeed can be applied there. This task is one of my favorites. However, no simulation is as detailed as the real world, and as every experienced student knows very well, things that are written in the textbook might not always work exactly the same in practice. So the problem formulation naturally emerges. Our job is to prepare this AI in this simulation, so it becomes good enough to perform well in the real world. Well, good news. First, let's think about the fact that in a simulation, we can train much faster, as we are not bound by the physical limits of the robot hand. In a simulation, we are bound by our processing power, which is much, much more vast, and is growing every day. So this means that the simulated environments can be as grueling as we can make them be. What's even more, we can do something that OpenAI refers to as automatic domain randomization. This is one of the key contributions of this paper. The domain randomization part means that it creates a large number of random environments, each of which is a little different, and the AI is meant to learn how to account for these differences and hopefully, as a result, obtain general knowledge about our world. The automatic part is responsible for detecting how much randomization the neural network can shoulder, and hence the difficulty of these random environments is increased over time. So how good are the results? Well, spectacular. In fact, hold onto your papers because it can not only dexterously manipulate and solve the cube, but we can even hamstring the hand in many different ways and it will still be able to do well. And I am telling you, scientists at OpenAI got very creative in tormenting this little hand. They added a rubber glove, tied multiple fingers together, threw a blanket on it, and pushed it around with a plastic jar of paint. It still worked. This is a testament to the usefulness of the mentioned automatic domain randomization technique. What's more, if you have a look at the paper, you will even see how well it was able to recover from a randomly breaking joint. What a time to be alive. As always, some limitations apply. The hand is only able to solve the cube about 60% of the time for simpler cases and the success rate drops to 20% for the most difficult ones. If it gets stuck, it typically does so in the first few rotations.
But so far, we have been able to do this 0% of the time. And given that the first steps towards cracking the problem are almost always the hardest, I have no doubt that two more papers down the line, this will become significantly more reliable. But, you know what? We are talking about OpenAI. Make it one paper. This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford and Berkeley. Here you see a write-up of theirs where they explain how to visualize the gradients running through your models and illustrate it through the example of predicting protein structure. They also have a live example that you can try. Make sure to visit them through wandb.com slash papers, W-A-N-D-B.com slash papers, or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
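The transcript above describes automatic domain randomization: training environments are randomized, and the ranges of randomization are widened automatically once the policy copes with the current difficulty. Below is a heavily simplified sketch of that idea; the class name, parameters, and threshold values are illustrative assumptions, not OpenAI's implementation.

```python
import random

class AutomaticDomainRandomization:
    """Toy sketch of ADR: each physics parameter is sampled from a range,
    and the range is widened only while the agent performs well enough."""

    def __init__(self):
        # Nominal simulation parameters and how far we currently randomize them.
        self.nominal = {"cube_size": 0.055, "friction": 1.0, "gravity": 9.81}
        self.spread = {name: 0.0 for name in self.nominal}  # start with no randomization
        self.expand_threshold = 0.8  # assumed success rate needed to widen ranges
        self.expand_step = 0.02      # assumed widening step per evaluation

    def sample_environment(self):
        """Draw one randomized environment within the current ranges."""
        return {
            name: value * (1.0 + random.uniform(-self.spread[name], self.spread[name]))
            for name, value in self.nominal.items()
        }

    def update(self, success_rate):
        """Widen the randomization ranges once the policy handles the current ones."""
        if success_rate >= self.expand_threshold:
            for name in self.spread:
                self.spread[name] += self.expand_step

# Usage sketch: train on sampled environments, then report success back to ADR.
adr = AutomaticDomainRandomization()
for iteration in range(3):
    env_params = adr.sample_environment()  # e.g. feed these into the simulator
    success_rate = 0.85                    # placeholder for measured performance
    adr.update(success_rate)
```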
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is 2 Minute Papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.6000000000000005, "end": 9.92, "text": " Today, we are going to talk about OpenAI's robot hand that dexterously manipulates and"}, {"start": 9.92, "end": 12.08, "text": " solves Rubik's cube."}, {"start": 12.08, "end": 14.76, "text": " Here you can marvel at this majestic result."}, {"start": 14.76, "end": 19.64, "text": " Now, why did I use the term dexterously manipulates Rubik's cube?"}, {"start": 19.64, "end": 22.84, "text": " In this project, there are two problems to solve."}, {"start": 22.84, "end": 29.400000000000002, "text": " One, finding out what kind of rotation we need to get closer to a soft cube, and two,"}, {"start": 29.4, "end": 34.199999999999996, "text": " listing the finger positions to be able to execute these prescribed rotations."}, {"start": 34.199999999999996, "end": 38.839999999999996, "text": " And this paper is about the letter, which means that the rotation sequences are given by"}, {"start": 38.839999999999996, "end": 44.12, "text": " a previously existing algorithm, and OpenAI's method manipulates the hand to be able to"}, {"start": 44.12, "end": 45.959999999999994, "text": " follow this algorithm."}, {"start": 45.959999999999994, "end": 50.68, "text": " To rephrase it, the robot hand doesn't really know how to solve the cube and is told"}, {"start": 50.68, "end": 56.44, "text": " what to do, and the contribution lies in the robot figuring out how to execute these"}, {"start": 56.44, "end": 57.76, "text": " rotations."}, {"start": 57.76, "end": 61.599999999999994, "text": " If you take only one thing from this video, let it be this thought."}, {"start": 61.599999999999994, "end": 67.36, "text": " Now, to perform all this, we have to first solve a problem in a computer simulation, where"}, {"start": 67.36, "end": 73.0, "text": " we can learn and iterate quickly, and then transfer everything the agent learned there"}, {"start": 73.0, "end": 78.28, "text": " to the real world, and hope that it obtained general knowledge that indeed can be applied"}, {"start": 78.28, "end": 79.28, "text": " there."}, {"start": 79.28, "end": 81.47999999999999, "text": " This task is one of my favorites."}, {"start": 81.47999999999999, "end": 86.68, "text": " However, no simulation is as detailed as the real world, and as every experienced student"}, {"start": 86.68, "end": 91.84, "text": " knows very well, things that are written in the textbook might not always work exactly"}, {"start": 91.84, "end": 93.60000000000001, "text": " the same in practice."}, {"start": 93.60000000000001, "end": 96.88000000000001, "text": " So the problem formulation naturally emerges."}, {"start": 96.88000000000001, "end": 102.76, "text": " Our job is to prepare this AI in this simulation, so it becomes good enough to perform well"}, {"start": 102.76, "end": 103.96000000000001, "text": " in the real world."}, {"start": 103.96000000000001, "end": 105.44000000000001, "text": " Well, good news."}, {"start": 105.44000000000001, "end": 110.92000000000002, "text": " First, let's think about the fact that in a simulation, we can train much faster, as"}, {"start": 110.92000000000002, "end": 114.28, "text": " we are not bound by the physical limits of the robot hand."}, {"start": 114.28, "end": 120.16, "text": " In a simulation, we are bound by our processing power, which is much, much more vast, and"}, {"start": 120.16, "end": 122.24, "text": " is growing every 
day."}, {"start": 122.24, "end": 127.76, "text": " So this means that the simulated environments can be as grueling as we can make them be,"}, {"start": 127.76, "end": 134.16, "text": " what's even more, we can do something that OpenAI refers to as automatic domain randomization."}, {"start": 134.16, "end": 136.88, "text": " This is one of the key contributions of this paper."}, {"start": 136.88, "end": 142.28, "text": " The domain randomization part means that it creates a large number of random environments,"}, {"start": 142.28, "end": 147.6, "text": " each of which are a little different, and the AI is meant to learn how to account for"}, {"start": 147.6, "end": 153.92000000000002, "text": " these differences and hopefully, as a result, obtain general knowledge about our world."}, {"start": 153.92000000000002, "end": 159.0, "text": " The automatic part is responsible for detecting how much randomization the neural network"}, {"start": 159.0, "end": 165.24, "text": " can shoulder and hence the difficulty of these random environments is increased over time."}, {"start": 165.24, "end": 167.76, "text": " So how good are the results?"}, {"start": 167.76, "end": 170.04, "text": " Well, spectacular."}, {"start": 170.04, "end": 174.84, "text": " In fact, hold onto your papers because it can not only dexterously manipulate and solve"}, {"start": 174.84, "end": 179.92, "text": " the cube, but we can even hamstring the hand in many different ways and it will still"}, {"start": 179.92, "end": 181.76, "text": " be able to do well."}, {"start": 181.76, "end": 187.07999999999998, "text": " And I am telling you, scientists at OpenAI got very creative in tormenting this little"}, {"start": 187.07999999999998, "end": 188.07999999999998, "text": " hand."}, {"start": 188.07999999999998, "end": 194.76, "text": " They added a rubber glove, tied multiple fingers together, threw a blanket on it, and pushed"}, {"start": 194.76, "end": 197.68, "text": " it around with a plastic jar of paint."}, {"start": 197.68, "end": 199.28, "text": " It still worked."}, {"start": 199.28, "end": 204.44, "text": " This is a testament to the usefulness of the mentioned automatic domain randomization technique."}, {"start": 204.44, "end": 209.52, "text": " What's more, if you have a look at the paper, you will even see how well it was able"}, {"start": 209.52, "end": 212.96, "text": " to recover from a randomly breaking joint."}, {"start": 212.96, "end": 214.76, "text": " What a time to be alive."}, {"start": 214.76, "end": 217.28, "text": " As always, some limitations apply."}, {"start": 217.28, "end": 223.04, "text": " The hand is only able to solve the cube about 60% of the time for simpler cases and the"}, {"start": 223.04, "end": 227.0, "text": " success rate drops to 20% for the most difficult ones."}, {"start": 227.0, "end": 230.32, "text": " If it gets stuck, it typically does in the first few rotations."}, {"start": 230.32, "end": 235.04, "text": " But so far, we have been able to do this 0% of the time."}, {"start": 235.04, "end": 240.0, "text": " And given that the first steps towards cracking the problem are almost always the hardest,"}, {"start": 240.0, "end": 244.84, "text": " I have no doubt that two more papers down the line, this will become significantly more"}, {"start": 244.84, "end": 245.84, "text": " reliable."}, {"start": 245.84, "end": 247.6, "text": " But, you know what?"}, {"start": 247.6, "end": 249.76, "text": " We are talking about OpenAI."}, {"start": 249.76, "end": 251.32, "text": " Make it one 
paper."}, {"start": 251.32, "end": 254.52, "text": " This episode has been supported by weights and biases."}, {"start": 254.52, "end": 259.2, "text": " weights and biases provides tools to track your experiments in your deep learning projects."}, {"start": 259.2, "end": 264.92, "text": " It can save you a ton of time and money in this project and is being used by OpenAI, Toyota"}, {"start": 264.92, "end": 267.68, "text": " Research, Stanford and Berkeley."}, {"start": 267.68, "end": 272.28000000000003, "text": " Here you see a right up of theirs where they explain how to visualize the gradients running"}, {"start": 272.28000000000003, "end": 277.92, "text": " through your models and illustrate it through the example of predicting protein structure."}, {"start": 277.92, "end": 280.44, "text": " They also have a live example that you can try."}, {"start": 280.44, "end": 287.88, "text": " Make sure to visit them through WendeeB.com slash papers, W-A-D-B.com slash papers, or just"}, {"start": 287.88, "end": 291.8, "text": " click the link in the video description and you can get a free demo today."}, {"start": 291.8, "end": 295.84, "text": " Or thanks to weights and biases for helping us make better videos for you."}, {"start": 295.84, "end": 325.79999999999995, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=cTqVhcrilrE
This AI Learned To Animate Humanoids!🚶
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Neural State Machine for Character-Scene Interactions" is available here: https://github.com/sebastianstarke/AI4Animation 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Anastasia Marchenkova, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Matthias Jost, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #GameDev
Dear Fellow Scholars, this is 2 Minute Papers with Károly Zsolnai-Fehér. If we have an animation movie or a computer game with quadrupeds and we are yearning for really high quality, life-like animations, motion capture is often the go-to tool for the job. Motion capture means that we put an actor, in our case a dog, in the studio, ask it to perform sitting, trotting, pacing and jumping, record this motion and transfer it onto our virtual character. In an earlier work, a learning-based technique was introduced by the name Mode-Adaptive Neural Network, and it was able to correctly weave together these previously recorded motions, and not only that, but it also addressed these unnatural sliding motions that were produced by previous works. As you see here, it also worked well on more challenging landscapes. We talked about this paper approximately 100 videos or, in other words, a little more than a year ago, and I noted that it was scientifically interesting, it was evaluated well, it had all the ingredients for a truly excellent paper. But one thing was missing. So, what is that one thing? Well, we haven't seen the characters interacting with the scene itself. If you liked this previous paper, you are going to be elated by this new one, because this new work is from the very same group, goes by the name Neural State Machine, and introduces character-scene interactions for bipeds. Now, we suddenly jumped from a quadruped paper to a biped one, and the reason for this is that I was looking to introduce the concept of foot sliding, which will be measured later for this new method too. Stay tuned. So, in this new problem formulation, we need to guide the character to a challenging end state, for instance, sitting in a chair, while being able to maneuver through all kinds of geometry. We'll use the chair example a fair bit in the next minute or two. So, I'll stress that this can do a whole lot more, the chair is just used as a vehicle to get a taste of how this technique works. But the end state needn't just be some kind of chair. It can be any chair. This chair may have all kinds of different heights and shapes and the agent has to be able to change the animations and stitch them together correctly regardless of the geometry. To achieve this, the authors propose an interesting new data augmentation model. Since we are working with neural networks, we already have a training set to teach it about motion, and data augmentation means that we extend this data set with lots and lots of new information to make the AI generalize better to unseen real world examples. So, how is this done here exactly? Well, the authors proposed a clever idea to do this. Let's walk through their five prescribed steps. One, let's use motion capture data, have the subject sit down and see what the contact points are when it happens. Two, we then record the curves that describe the entirety of the motion of sitting down. So far so good, but we are not interested in one kind of chair. We want to sit in all kinds of chairs, so three, generate a large selection of different geometries and adjust the location of these contact points accordingly. Four, change the motion curves so they indeed end at the new transformed contact points. And five, move the joints of the character to make it follow this motion curve and compute the evolution of the character pose. And then pair up this motion with the chair geometry and chuck it into the new augmented training set.
Now make no mistake, the paper contains much, much more than this, so make sure to have a look in the video description. So what do we get for all this work? Well, have a look at this trembling character from a previous paper, and now look at the new synthesized motions. Natural, smooth, creamy, and I don't see artifacts. Also, here you see some results that measure the amount of foot sliding during these animations, which is subject to minimization. That means that the smaller the bars are, the better. With NSM, you see how this neural state machine method produces much less than previous methods. And now we see how cool it is that we talked about the quadruped paper as well, because we see that it even beats MANN, the Mode-Adaptive Neural Networks from the previous paper. That one had very little foot sliding, and apparently it can be improved by quite a bit. The positional and rotational errors in the animation it offers are also by far the lowest of the bunch. Since it works in real time, it can also be used for computer games and virtual reality applications. And all this improvement within one year of work. What a time to be alive. If you are a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you that they are offering GPU cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold onto your papers because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to LambdaLabs.com slash papers and sign up for one of their amazing GPU instances today. Thanks for watching and for your generous support and I'll see you next time.
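The five augmentation steps in this row's transcript boil down to transforming recorded contact points onto a new chair geometry and bending the recorded motion curve so it still ends on those transformed contacts. The sketch below illustrates only that geometric idea in a toy form; the blending scheme and all numbers are assumptions, not the authors' pipeline.

```python
import numpy as np

def retarget_motion_curve(curve, old_contact, new_contact):
    """Toy version of steps three and four: shift a recorded motion curve so it
    ends at a transformed contact point, blending the offset in over time so the
    start of the motion is left untouched."""
    curve = np.asarray(curve, dtype=float)                 # (T, 3) positions over time
    offset = np.asarray(new_contact) - np.asarray(old_contact)
    weights = np.linspace(0.0, 1.0, len(curve))[:, None]   # 0 at start, 1 at the contact
    return curve + weights * offset

# Usage sketch: a recorded "sit down" trajectory ending at the captured contact point,
# retargeted to a hypothetical taller chair seat.
recorded = np.array([[0.0, 0.0, 0.9], [0.2, 0.0, 0.7], [0.4, 0.0, 0.45]])
old_contact = recorded[-1]
new_contact = np.array([0.4, 0.0, 0.55])                   # assumed new seat height
augmented = retarget_motion_curve(recorded, old_contact, new_contact)
print(augmented[-1])                                       # ends exactly at the new contact
```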
[{"start": 0.0, "end": 4.38, "text": " Dear Fellow Scholars, this is 2 Minute Papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.38, "end": 9.22, "text": " If we have an animation movie or a computer game with quadrupeds and we are yearning for"}, {"start": 9.22, "end": 14.84, "text": " really high quality, life-like animations, motion capture is often the go-to tool for the"}, {"start": 14.84, "end": 16.0, "text": " job."}, {"start": 16.0, "end": 21.12, "text": " Motion capture means that we put an actor, in our case a dog in the studio, we ask it to"}, {"start": 21.12, "end": 27.16, "text": " perform sitting, trotting, pacing and jumping, record this motion and transfer it onto our"}, {"start": 27.16, "end": 28.72, "text": " virtual character."}, {"start": 28.72, "end": 33.8, "text": " In an earlier work, a learning-based technique was introduced by the name Mode Adaptive Neural"}, {"start": 33.8, "end": 39.44, "text": " Network and it was able to correctly weave together these previously recorded motions"}, {"start": 39.44, "end": 44.519999999999996, "text": " and not only that, but it also addressed these unnatural sliding motions that were produced"}, {"start": 44.519999999999996, "end": 51.64, "text": " by previous works."}, {"start": 51.64, "end": 55.8, "text": " As you see here, it also worked well on more challenging landscapes."}, {"start": 55.8, "end": 61.239999999999995, "text": " We talked about this paper approximately 100 videos or, in other words, a little more"}, {"start": 61.239999999999995, "end": 67.78, "text": " than a year ago and I noted that it was scientifically interesting, it was evaluated well, it had"}, {"start": 67.78, "end": 71.24, "text": " all the ingredients for a truly excellent paper."}, {"start": 71.24, "end": 73.4, "text": " But one thing was missing."}, {"start": 73.4, "end": 75.75999999999999, "text": " So, what is that one thing?"}, {"start": 75.75999999999999, "end": 80.8, "text": " Well, we haven't seen the characters interacting with the scene itself."}, {"start": 80.8, "end": 85.4, "text": " If you like this previous paper, you are going to be elated by this new one because this"}, {"start": 85.4, "end": 91.4, "text": " new work is from the very same group and goes by the name Neural State Machine and introduces"}, {"start": 91.4, "end": 94.52000000000001, "text": " character-seen interactions for bipeds."}, {"start": 94.52000000000001, "end": 99.88000000000001, "text": " Now, we suddenly jumped from a quadruped paper to a biped one and the reason for this is"}, {"start": 99.88000000000001, "end": 104.72, "text": " that because I was looking to introduce the concept of food-sliding, which will be measured"}, {"start": 104.72, "end": 107.2, "text": " later for this new method too."}, {"start": 107.2, "end": 108.2, "text": " Stay tuned."}, {"start": 108.2, "end": 112.84, "text": " So, in this new problem formulation, we need to guide the character to a challenging and"}, {"start": 112.84, "end": 117.96000000000001, "text": " state, for instance, sitting in a chair while being able to maneuver through all kinds"}, {"start": 117.96000000000001, "end": 118.96000000000001, "text": " of geometry."}, {"start": 118.96000000000001, "end": 122.56, "text": " We'll use the chair example a fair bit in the next minute or two."}, {"start": 122.56, "end": 128.32, "text": " So, I'll stress that this can do a whole lot more, the chair is just used as a vehicle"}, {"start": 128.32, "end": 131.16, "text": " to get a taste of how this technique works."}, 
{"start": 131.16, "end": 134.8, "text": " But the end state needn't just be some kind of chair."}, {"start": 134.8, "end": 136.64000000000001, "text": " It can be any chair."}, {"start": 136.64000000000001, "end": 140.96, "text": " This chair may have all kinds of different heights and shapes and the agent has to be able"}, {"start": 140.96, "end": 146.64000000000001, "text": " to change the animations and stitch them together correctly regardless of the geometry."}, {"start": 146.64000000000001, "end": 152.08, "text": " To achieve this, the authors propose an interesting new data augmentation model."}, {"start": 152.08, "end": 156.08, "text": " Since we are working with Neural Networks, we already have a training set to teach it"}, {"start": 156.08, "end": 161.76000000000002, "text": " about motion and data augmentation means that we extend this data set with lots and lots"}, {"start": 161.76000000000002, "end": 167.8, "text": " of new information to make the AI generalize better to unseen real world examples."}, {"start": 167.8, "end": 170.52, "text": " So, how is this done here exactly?"}, {"start": 170.52, "end": 174.56, "text": " Well, the authors proposed a clever idea to do this."}, {"start": 174.56, "end": 177.4, "text": " Let's walk through their five prescribed steps."}, {"start": 177.4, "end": 183.32000000000002, "text": " One, let's use motion capture data, have the subject sit down and see what the contact"}, {"start": 183.32000000000002, "end": 185.52, "text": " points are when it happens."}, {"start": 185.52, "end": 191.44, "text": " Two, we then record the curves that describe the entirety of the motion of sitting down."}, {"start": 191.44, "end": 195.52, "text": " So far so good, but we are not interested in one kind of chair."}, {"start": 195.52, "end": 201.08, "text": " We wanted to sit into all kinds of chairs, so three, generate a large selection of different"}, {"start": 201.08, "end": 205.56, "text": " geometries and adjust the location of these contact points accordingly."}, {"start": 205.56, "end": 212.04000000000002, "text": " Four, change the motion curves so they indeed end at the new transformed contact points."}, {"start": 212.04000000000002, "end": 217.56, "text": " And five, move the joints of the character to make it follow this motion curve and compute"}, {"start": 217.56, "end": 220.16000000000003, "text": " the evolution of the character pose."}, {"start": 220.16, "end": 225.48, "text": " And then pair up this motion with the chair geometry and chuck it into the new augmented"}, {"start": 225.48, "end": 226.96, "text": " training set."}, {"start": 226.96, "end": 232.07999999999998, "text": " Now make no mistake, the paper contains much, much more than this, so make sure to have"}, {"start": 232.07999999999998, "end": 234.2, "text": " a look in the video description."}, {"start": 234.2, "end": 236.92, "text": " So what do we get for all this work?"}, {"start": 236.92, "end": 242.64, "text": " Well, have a look at this trembling character from a previous paper and now look at the"}, {"start": 242.64, "end": 245.0, "text": " new synthesized motions."}, {"start": 245.0, "end": 249.84, "text": " Natural, smooth, creamy and I don't see artifacts."}, {"start": 249.84, "end": 255.4, "text": " Also, here you see some results that measure the amount of food sliding during these animations,"}, {"start": 255.4, "end": 257.84000000000003, "text": " which is subject to minimization."}, {"start": 257.84000000000003, "end": 261.08, "text": " That means that the smaller 
the bars are, the better."}, {"start": 261.08, "end": 267.32, "text": " With NSM, you see how this neural state machine method produces much less than previous methods."}, {"start": 267.32, "end": 272.32, "text": " And now we see how cool it is that we talked about the quadruped paper as well because"}, {"start": 272.32, "end": 278.0, "text": " we see that it even beats the M A and N, the mode adaptive neural networks from the previous"}, {"start": 278.0, "end": 279.08, "text": " paper."}, {"start": 279.08, "end": 284.91999999999996, "text": " That one had very little food sliding and apparently it can be improved by quite a bit."}, {"start": 284.91999999999996, "end": 290.56, "text": " The positional and rotational errors in the animation it offers are also by far the lowest"}, {"start": 290.56, "end": 291.84, "text": " of the bunch."}, {"start": 291.84, "end": 296.71999999999997, "text": " Since it works in real time, it can also be used for computer games and virtual reality"}, {"start": 296.71999999999997, "end": 298.08, "text": " applications."}, {"start": 298.08, "end": 301.76, "text": " And all this improvement within one year of work."}, {"start": 301.76, "end": 303.44, "text": " What a time to be alive."}, {"start": 303.44, "end": 308.84, "text": " If you are a researcher or a startup looking for cheap GPU compute to run these algorithms,"}, {"start": 308.84, "end": 311.47999999999996, "text": " check out Lambda GPU Cloud."}, {"start": 311.47999999999996, "end": 316.47999999999996, "text": " I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you"}, {"start": 316.47999999999996, "end": 319.96, "text": " that they are offering GPU cloud services as well."}, {"start": 319.96, "end": 327.64, "text": " The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19."}, {"start": 327.64, "end": 332.91999999999996, "text": " Lambda's web-based IDE lets you easily access your instance right in your browser."}, {"start": 332.91999999999996, "end": 338.67999999999995, "text": " And finally, hold onto your papers because the Lambda GPU Cloud costs less than half"}, {"start": 338.68, "end": 341.12, "text": " of AWS and Azure."}, {"start": 341.12, "end": 346.32, "text": " Make sure to go to LambdaLabs.com slash papers and sign up for one of their amazing GPU"}, {"start": 346.32, "end": 348.0, "text": " instances today."}, {"start": 348.0, "end": 377.56, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=0sR1rU3gLzQ
Google's AI Clones Your Voice After Listening for 5 Seconds! 🤐
❤️ Check out Weights & Biases here and sign up for a free demo: https://www.wandb.com/papers The shown blog post is available here: https://www.wandb.com/articles/fundamentals-of-neural-networks 📝 The paper "Transfer Learning from Speaker Verification to Multispeaker Text-To-Speech Synthesis" and audio samples are available here: https://arxiv.org/abs/1806.04558 https://google.github.io/tacotron/publications/speaker_adaptation/ An unofficial implementation of this paper is available here. Note that this was not made by the authors of the original paper and may contain deviations from the described technique - please judge its results accordingly! https://github.com/CorentinJ/Real-Time-Voice-Cloning 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Anastasia Marchenkova, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Benji Rabhan, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Matthias Jost, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #VoiceCloning #Google
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Today, we are going to listen to some amazing improvements in the area of AI-based voice cloning. For instance, if someone wanted to clone my voice, there are hours and hours of my recordings on YouTube and elsewhere, so they could do it with previously existing techniques. But the question today is, if we had even more advanced methods to do this, how big of a sound sample would we really need for this? Do we need a few hours? A few minutes? The answer is no. Not at all. Hold on to your papers because this new technique only requires five seconds. Let's listen to a couple of examples. The Norsemen considered the rainbow as a bridge over which the gods passed from Earth to their home in the sky. Take a look at these pages for Cricut Creek Drive. There are several listings for gas station. Here's the forecast for the next four days. These take the shape of a long round arch with its path high above and its two ends apparently beyond the horizon. Take a look at these pages for Cricut Creek Drive. There are several listings for gas station. Here's the forecast for the next four days. Absolutely incredible. The timbre of the voice is very similar and it is able to synthesize sounds and consonants that have to be inferred because they were not heard in the original voice sample. This requires a certain kind of intelligence, and quite a bit of that. So while we are at that, how does this new system work? Well, it requires three components. One, the speaker encoder is a neural network that was trained on thousands and thousands of speakers and is meant to squeeze all this learned data into a compressed representation. In other words, it tries to learn the essence of human speech from many, many speakers. To clarify, I will add that this system listens to thousands of people talking to learn the intricacies of human speech, but this training step needs to be done only once, and after that, it was allowed just five seconds of speech data from someone it hasn't heard previously, and later the synthesis takes place using these five seconds as an input. Two, we have a synthesizer that takes text as an input. This is what we would like our test subject to say, and it gives us a mel spectrogram, which is a concise representation of someone's voice and intonation. The implementation of this module is based on DeepMind's Tacotron 2 technique, and here you can see an example of this mel spectrogram built for a male and two female speakers. On the left, we have the spectrograms for the reference recordings, the voice samples if you will, and on the right, we specify a piece of text that we would like the learning algorithm to say, and it produces these corresponding synthesized spectrograms. But eventually we would like to listen to something, and for that we need a waveform as an output. So, the third element is thus a neural vocoder that does exactly that, and this component is implemented by DeepMind's WaveNet technique. This is the architecture that led to these amazing examples. So how do we measure exactly how amazing it is? When we have a solution, evaluating it is also anything but trivial. In principle, we are looking for a result that is both close to the recording that we have of the target person but says something completely different, and all this in a natural manner. This naturalness and similarity can be measured, but we are not nearly done yet because the problem gets even more difficult.
For instance, it matters how we fit the three puzzle pieces together, and then what data we train it on, of course, also matters a great deal. Here you see that if we train on one data set and test the results against a different one, and then swap the two, the results in naturalness and similarity will differ significantly. The paper contains a very detailed evaluation section that explains how to deal with these difficulties. The mean opinion score is measured in this section, which is a number that describes how well a sound sample would pass as genuine human speech. And we haven't even talked about the speaker verification part, so make sure to have a look at the paper. So indeed, we can clone each other's voice by using a sample of only 5 seconds. What a time to be alive. This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford and Berkeley. They also wrote a guide on the fundamentals of neural networks where they explain in simple terms how to train a neural network properly, what the most common errors are and how to fix them. It is really great, you've got to have a look. So make sure to visit them through wandb.com slash papers, W-A-N-D-B.com slash papers, or just click the link in the video description and you can get a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
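The transcript above describes a three-stage pipeline: a speaker encoder that turns a short reference clip into an embedding, a synthesizer that turns text plus that embedding into a mel spectrogram, and a vocoder that turns the spectrogram into a waveform. The sketch below only wires these stages together with placeholder functions; the function names, shapes, and constants are assumptions for illustration, not the paper's actual code or models.

```python
import numpy as np

def speaker_encoder(reference_audio):
    """Placeholder speaker encoder: squeeze a ~5 second clip into a fixed-size
    speaker embedding (a real system would use a trained network)."""
    return np.random.default_rng(0).normal(size=256)

def synthesizer(text, speaker_embedding):
    """Placeholder Tacotron-style synthesizer: text plus speaker embedding in,
    mel spectrogram out (frames x mel bins)."""
    num_frames = 20 * len(text.split())  # made-up frame count per word
    return np.zeros((num_frames, 80))

def vocoder(mel_spectrogram, sample_rate=22050):
    """Placeholder WaveNet-style vocoder: mel spectrogram in, waveform out."""
    hop_length = 256                     # assumed frames-to-samples ratio
    return np.zeros(mel_spectrogram.shape[0] * hop_length)

# Usage sketch: clone a voice from a short reference clip and speak new text.
reference_clip = np.zeros(5 * 22050)     # stand-in for 5 seconds of audio
embedding = speaker_encoder(reference_clip)
mel = synthesizer("Here's the forecast for the next four days.", embedding)
waveform = vocoder(mel)
print(embedding.shape, mel.shape, waveform.shape)
```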
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is Two Minute Papers with Kato Zsolnai-Fehir."}, {"start": 4.4, "end": 10.0, "text": " Today, we are going to listen to some amazing improvements in the area of AI-based voice"}, {"start": 10.0, "end": 11.0, "text": " cloning."}, {"start": 11.0, "end": 15.96, "text": " For instance, if someone wanted to clone my voice, there are hours and hours of my recordings"}, {"start": 15.96, "end": 20.64, "text": " on YouTube and elsewhere, they could do it with previously existing techniques."}, {"start": 20.64, "end": 25.72, "text": " But the question today is, if we had even more advanced methods to do this, how big of"}, {"start": 25.72, "end": 28.68, "text": " a sound sample would we really need for this?"}, {"start": 28.68, "end": 30.44, "text": " Do we need a few hours?"}, {"start": 30.44, "end": 31.84, "text": " A few minutes?"}, {"start": 31.84, "end": 33.44, "text": " The answer is no."}, {"start": 33.44, "end": 34.44, "text": " Not at all."}, {"start": 34.44, "end": 39.84, "text": " Hold on to your papers because this new technique only requires five seconds."}, {"start": 39.84, "end": 42.92, "text": " Let's listen to a couple examples."}, {"start": 42.92, "end": 46.72, "text": " The Norsemen considered the rainbow as a bridge over which the gods passed from Earth to"}, {"start": 46.72, "end": 49.16, "text": " their home in the sky."}, {"start": 49.16, "end": 53.0, "text": " Take a look at these pages for Cricut Creek Drive."}, {"start": 53.0, "end": 56.96, "text": " There are several listings for gas station."}, {"start": 56.96, "end": 60.76, "text": " Here's the forecast for the next four days."}, {"start": 60.76, "end": 64.88, "text": " These take the shape of a long round arch with its path high above and its two ends apparently"}, {"start": 64.88, "end": 67.48, "text": " beyond the horizon."}, {"start": 67.48, "end": 71.2, "text": " Take a look at these pages for Cricut Creek Drive."}, {"start": 71.2, "end": 74.36, "text": " There are several listings for gas station."}, {"start": 74.36, "end": 78.2, "text": " Here's the forecast for the next four days."}, {"start": 78.2, "end": 79.2, "text": " Absolutely incredible."}, {"start": 79.2, "end": 85.24000000000001, "text": " The timbre of the voice is very similar and it is able to synthesize sounds and consonants"}, {"start": 85.24, "end": 89.8, "text": " that have to be inferred because they were not heard in the original voice sample."}, {"start": 89.8, "end": 94.11999999999999, "text": " This requires a certain kind of intelligence and quite a bit of that."}, {"start": 94.11999999999999, "end": 97.88, "text": " So while we are at that, how does this new system work?"}, {"start": 97.88, "end": 101.28, "text": " Well, it requires three components."}, {"start": 101.28, "end": 106.6, "text": " One, the speaker encoder is a neural network that was trained on thousands and thousands"}, {"start": 106.6, "end": 113.0, "text": " of speakers and is meant to squeeze all this learned data into a compressed representation."}, {"start": 113.0, "end": 118.68, "text": " In other words, it tries to learn the essence of human speech from many, many speakers."}, {"start": 118.68, "end": 123.96000000000001, "text": " To clarify, I will add that this system listens to thousands of people talking to learn the"}, {"start": 123.96000000000001, "end": 129.84, "text": " intricacies of human speech, but this training step needs to be done only once and after"}, {"start": 129.84, "end": 135.32, 
"text": " that it was allowed just five seconds of speech data from someone they haven't heard of"}, {"start": 135.32, "end": 141.24, "text": " previously and later the synthesis takes place using these five seconds as an input."}, {"start": 141.24, "end": 145.48000000000002, "text": " Two, we have a synthesizer that takes text as an input."}, {"start": 145.48000000000002, "end": 150.4, "text": " This is what we would like our test subject to say and it gives us a mouse spectrogram"}, {"start": 150.4, "end": 154.68, "text": " which is a concise representation of someone's voice and the intonation."}, {"start": 154.68, "end": 159.92000000000002, "text": " The implementation of this module is based on DeepMind's TECOTRON-2 technique and here"}, {"start": 159.92000000000002, "end": 165.8, "text": " you can see an example of this mouse spectrogram built for a male and two female speakers."}, {"start": 165.8, "end": 170.08, "text": " On the left, we have the spectrograms for the reference recordings, the voice samples"}, {"start": 170.08, "end": 174.72, "text": " if you will and on the right, we specify a piece of text that we would like the learning"}, {"start": 174.72, "end": 180.48000000000002, "text": " algorithm to add and it produces these corresponding synthesized spectrograms."}, {"start": 180.48000000000002, "end": 186.32000000000002, "text": " But eventually we would like to listen to something and for that we need a waveform as an output."}, {"start": 186.32000000000002, "end": 192.52, "text": " So, the third element is thus a neural vocoder that does exactly that and this component"}, {"start": 192.52, "end": 196.24, "text": " is implemented by DeepMind's wavenet technique."}, {"start": 196.24, "end": 200.04000000000002, "text": " This is the architecture that led to these amazing examples."}, {"start": 200.04, "end": 203.92, "text": " So how do we measure exactly how amazing it is?"}, {"start": 203.92, "end": 208.48, "text": " When we have a solution evaluating it is also anything but trivial."}, {"start": 208.48, "end": 213.39999999999998, "text": " In principle we are looking for a result that is both close to the recording that we have"}, {"start": 213.39999999999998, "end": 220.28, "text": " of the target person but says something completely different and all this in a natural manner."}, {"start": 220.28, "end": 225.51999999999998, "text": " This naturalness and similarity can be measured but we are not nearly done yet because the"}, {"start": 225.51999999999998, "end": 228.28, "text": " problem gets even more difficult."}, {"start": 228.28, "end": 233.96, "text": " For instance it matters how we fit the three puzzle pieces together and then what data"}, {"start": 233.96, "end": 237.84, "text": " we train it on of course also matters a great deal."}, {"start": 237.84, "end": 242.44, "text": " Here you see that if we train on a one data set and test the results against a different"}, {"start": 242.44, "end": 251.4, "text": " one and then swap the two and the results in naturalness and similarity will differ significantly."}, {"start": 251.4, "end": 256.52, "text": " The paper contains a very detailed evaluation section that explains how to deal with these"}, {"start": 256.52, "end": 257.52, "text": " difficulties."}, {"start": 257.52, "end": 262.47999999999996, "text": " The Inopinion score is measured in this section which is a number that describes how well"}, {"start": 262.47999999999996, "end": 266.24, "text": " a sound sample would pass as genuine human speech."}, {"start": 
266.24, "end": 270.32, "text": " And we haven't even talked about the speaker verification part so make sure to have a look"}, {"start": 270.32, "end": 271.59999999999997, "text": " at the paper."}, {"start": 271.59999999999997, "end": 277.59999999999997, "text": " So indeed we can clone each other's voice by using a sample of only 5 seconds."}, {"start": 277.59999999999997, "end": 279.47999999999996, "text": " What a time to be alive."}, {"start": 279.47999999999996, "end": 283.08, "text": " This episode has been supported by weights and biases."}, {"start": 283.08, "end": 288.0, "text": " weights and biases provides tools to track your experiments in your deep learning projects."}, {"start": 288.0, "end": 293.91999999999996, "text": " It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota"}, {"start": 293.91999999999996, "end": 296.71999999999997, "text": " Research, Stanford and Berkeley."}, {"start": 296.71999999999997, "end": 301.12, "text": " They also wrote a guide on the fundamentals of neural networks where they explain in"}, {"start": 301.12, "end": 306.52, "text": " simple terms how to train a neural network properly, what are the most common errors you"}, {"start": 306.52, "end": 309.24, "text": " can make and how to fix them."}, {"start": 309.24, "end": 311.52, "text": " It is really great you got to have a look."}, {"start": 311.52, "end": 318.08, "text": " So make sure to visit them through when db.com slash papers w a and db.com slash papers"}, {"start": 318.08, "end": 322.59999999999997, "text": " or just click the link in the video description and you can get a free demo today."}, {"start": 322.59999999999997, "end": 326.4, "text": " Our thanks to weights and biases for helping us make better videos for you."}, {"start": 326.4, "end": 356.35999999999996, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=FZZ9rpmVCqE
Ken Burns Effect, Now In 3D!
❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 📝 The paper "3D Ken Burns Effect from a Single Image" is available here: https://arxiv.org/abs/1909.05483 The paper with the Microplanet scene at the start is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/ Scene geometry: Marekv Image credits: Ian D. Keating, Kirk Lougheed (Link: https://www.flickr.com/photos/kirklougheed/36766944501 ), Leif Skandsen, Oliver Wang, Ben Abel, Aurel Manea, Jocelyn Erskine-Kellie, Jaisri Lingappa, and Intiaz Rahim. 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Anastasia Marchenkova, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Matthias Jost, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Have you heard of the Ken Burns effect? If you have been watching this channel, you have probably seen examples where a still image is shown and a zooming and panning effect is added to it. It looks something like this. Familiar, right? The fact that there is some motion is indeed pleasing for the eye, but something is missing. Since we are doing this with 2D images, all the depth information is lost, so we are missing out on the motion parallax effect that we would see in real life when moving the camera around. So in short, this is only 2D. Can this be done in 3D? Well, to find out, have a look at this. Wow, I love it. Much better, right? Well, if we tried to perform something like this without this paper, we'd be met with bad news. And that bad news is that we have to buy an RGBD camera. This kind of camera endows the 2D image with depth information, which is specialized hardware that is likely not available in our phones as of the making of this video. Now, since depth estimation from simple, monocular 2D images without depth data is a research field of its own, the first step sounds simple enough. Take one of those neural networks, then ask it to try to guess the depth of each pixel. Does this work? Well, let's have a look. As we move our imaginary camera around, uh-oh, this is not looking good. Do you see what the problems are here? Problem number 1 is the presence of geometric distortions. You see it if you look here. Problem number 2 is referred to as semantic distortion in the paper, or in other words, we now have missing data. But this poor, tiny human's hand is also… Ouch. Let's look at something else instead. If we start zooming into images, which is a hallmark of the Ken Burns effect, it gets even worse. We get artifacts. So how does this new paper address these issues? After creating the first, coarse depth map, an additional step is taken to alleviate the semantic distortion issue, and then this depth information is up-sampled to make sure that we have enough fine details to perform the 3D Ken Burns effect. Let's do that. Unfortunately, we are still nowhere near done yet. Previously occluded parts of the background suddenly become visible, and we have no information about those. So, how can we address that? Do you remember image inpainting? I hope so, but if not, no matter, I'll quickly explain what it is. Both learning-based and traditional handcrafted algorithms exist to try to fill in this missing information in images with sensible data by looking at its surroundings. This is also not as trivial as it might seem at first, for instance, just filling in sensible data is not enough, because this time around we are synthesizing videos, it has to be temporally coherent, which means that there must not be too much of a change from one frame to another, or else we'll get a flickering effect. As a result, we finally have these results that are not only absolutely beautiful, but the user study in the paper shows that they stack up against handcrafted results made by real artists. How cool is that? It also opens up really cool workflows that would normally be very difficult, if not impossible, to perform. For instance, here you see that we can freeze this lightning bolt in time, zoom around and marvel at the entire landscape. Love it.
Of course, limitations still apply. If we have really thin objects, such as this flagpole, they might be missing entirely from the depth map, or there are also cases where the image inpainter cannot fill in useful information. I cannot wait to see how this work evolves a couple of papers down the line. One more interesting tidbit: if you have a look at the paper, make sure to open it in an Adobe reader, and you will likely be very surprised to see that many of these things that you think are still images are actually animations. Papers are not only getting more mind-blowing by the day, but also more informative and beautiful as well. What a time to be alive. This video has been supported by you on Patreon. If you wish to support the series and also pick up cool perks in return, like early access to these episodes, or getting your name immortalized in the video description, make sure to visit us through patreon.com slash 2 minute papers. Thanks for watching and for your generous support and I'll see you next time.
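The transcript above walks through the pipeline behind the 3D Ken Burns effect: estimate a depth map from a single image, refine and upsample it, reproject pixels for a moving virtual camera, and inpaint the holes where the background was occluded. The sketch below only shows the skeleton of such a loop with crude placeholder functions; the function names and the camera model are assumptions, not the paper's implementation.

```python
import numpy as np

def estimate_depth(image):
    """Placeholder for monocular depth estimation (a real system uses a trained network)."""
    h, w, _ = image.shape
    return np.ones((h, w))                       # fake, flat depth map

def reproject(image, depth, camera_offset):
    """Placeholder novel-view step: shift pixels by a disparity proportional to
    1 / depth, leaving holes (zeros) where the background was occluded."""
    shift = int(camera_offset / depth[0, 0])      # single global shift for this toy example
    out = np.zeros_like(image)
    cols = np.clip(np.arange(image.shape[1]) + shift, 0, image.shape[1] - 1)
    out[:, cols] = image
    return out

def inpaint(frame):
    """Placeholder inpainting: fill the holes with a neutral color for illustration."""
    holes = frame.sum(axis=2) == 0
    frame[holes] = 128
    return frame

# Usage sketch: render a short virtual camera move from one still image.
image = np.full((240, 320, 3), 200, dtype=np.uint8)
depth = estimate_depth(image)
frames = [inpaint(reproject(image, depth, offset)) for offset in np.linspace(0.0, 8.0, 5)]
print(len(frames), frames[0].shape)
```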
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.4, "end": 6.98, "text": " Have you heard of the Can Burns effect?"}, {"start": 6.98, "end": 11.72, "text": " If you have been watching this channel, you have probably seen examples where a still image"}, {"start": 11.72, "end": 16.44, "text": " is shown and a zooming and panning effect is added to it."}, {"start": 16.44, "end": 19.0, "text": " It looks something like this."}, {"start": 19.0, "end": 20.5, "text": " Familiar, right?"}, {"start": 20.5, "end": 26.68, "text": " The fact that there is some motion is indeed pleasing for the eye, but something is missing."}, {"start": 26.68, "end": 31.72, "text": " Since we are doing this with 2D images, all the depth information is lost, so we are missing"}, {"start": 31.72, "end": 37.88, "text": " out on the motion parallax effect that we will see in real life when moving around the camera."}, {"start": 37.88, "end": 41.019999999999996, "text": " So in short, this is only 2D."}, {"start": 41.019999999999996, "end": 43.32, "text": " Can this be done in 3D?"}, {"start": 43.32, "end": 47.519999999999996, "text": " Well, to find out, have a look at this."}, {"start": 47.519999999999996, "end": 51.120000000000005, "text": " Wow, I love it."}, {"start": 51.120000000000005, "end": 52.44, "text": " Much better, right?"}, {"start": 52.44, "end": 57.879999999999995, "text": " Well, if we would try to perform something like this without this paper, we'd be met with"}, {"start": 57.879999999999995, "end": 59.12, "text": " bad news."}, {"start": 59.12, "end": 63.44, "text": " And that bad news is that we have to buy an RGBD camera."}, {"start": 63.44, "end": 69.28, "text": " This kind of camera endows the 2D image with depth information, which is specialized hardware"}, {"start": 69.28, "end": 74.03999999999999, "text": " that is likely not available in our phones as of the making of this video."}, {"start": 74.03999999999999, "end": 80.16, "text": " Now since depth estimation, from this simple, monocular 2D images without depth data is"}, {"start": 80.16, "end": 84.64, "text": " a research field of its own, the first step sounds simple enough."}, {"start": 84.64, "end": 91.2, "text": " Take one of those neural networks, then ask it to try to guess the depth of each pixel."}, {"start": 91.2, "end": 92.2, "text": " Does this work?"}, {"start": 92.2, "end": 94.36, "text": " Well, let's have a look."}, {"start": 94.36, "end": 100.16, "text": " As we move our imaginary camera around, oh oh, this is not looking good."}, {"start": 100.16, "end": 102.67999999999999, "text": " Do you see what the problems are here?"}, {"start": 102.67999999999999, "end": 106.24, "text": " Problem number 1 is the presence of geometric distortions."}, {"start": 106.24, "end": 114.47999999999999, "text": " You see it if you look here."}, {"start": 114.47999999999999, "end": 120.19999999999999, "text": " Problem number 2 is referred to as semantic distortion in the paper, or in other words,"}, {"start": 120.19999999999999, "end": 122.84, "text": " we now have missing data."}, {"start": 122.84, "end": 125.88, "text": " But this poor, tiny human's hand is also\u2026"}, {"start": 125.88, "end": 126.88, "text": " Ouch."}, {"start": 126.88, "end": 129.24, "text": " Let's look at something else instead."}, {"start": 129.24, "end": 134.64, "text": " If we start zooming in into images, which is a hallmark of the Can Burns effect, it gets"}, {"start": 134.64, "end": 
135.92, "text": " even worse."}, {"start": 135.92, "end": 137.92, "text": " We get artifacts."}, {"start": 137.92, "end": 141.51999999999998, "text": " So how does this new paper address these issues?"}, {"start": 141.51999999999998, "end": 146.23999999999998, "text": " After creating the first, coarse depth map, an additional step is taken to alleviate the"}, {"start": 146.23999999999998, "end": 152.83999999999997, "text": " semantic distortion issue, and then this depth information is up-sampled to make sure that"}, {"start": 152.83999999999997, "end": 157.67999999999998, "text": " we have enough fine details to perform the 3D Can Burns effect."}, {"start": 157.67999999999998, "end": 158.95999999999998, "text": " Let's do that."}, {"start": 158.95999999999998, "end": 162.76, "text": " Unfortunately, we are still nowhere near done yet."}, {"start": 162.76, "end": 168.6, "text": " We have previously occluded parts of the background suddenly become visible, and we have no information"}, {"start": 168.6, "end": 169.6, "text": " about those."}, {"start": 169.6, "end": 172.6, "text": " So, how can we address that?"}, {"start": 172.6, "end": 175.16, "text": " Do you remember image in painting?"}, {"start": 175.16, "end": 180.56, "text": " I hope so, but if not, no matter, I'll quickly explain what it is."}, {"start": 180.56, "end": 186.2, "text": " Both learning based and traditional handcrafted algorithms exist to try to fill in dismissing"}, {"start": 186.2, "end": 191.51999999999998, "text": " information in images with sensible data by looking at its surroundings."}, {"start": 191.52, "end": 196.88000000000002, "text": " This is also not as trivial as it might seem first, for instance, just filling in sensible"}, {"start": 196.88000000000002, "end": 202.84, "text": " data is not enough, because this time around we are synthesizing videos, it has to be"}, {"start": 202.84, "end": 208.04000000000002, "text": " temporally coherent, which means that there must not be too much of a change from one frame"}, {"start": 208.04000000000002, "end": 212.8, "text": " to another, or else we'll get a flickering effect."}, {"start": 212.8, "end": 218.16000000000003, "text": " As a result, we finally have these results that are not only absolutely beautiful, but"}, {"start": 218.16, "end": 223.56, "text": " the user study in the paper shows that they stack up against handcrafted results made"}, {"start": 223.56, "end": 225.35999999999999, "text": " by real artists."}, {"start": 225.35999999999999, "end": 227.32, "text": " How cool is that?"}, {"start": 227.32, "end": 232.88, "text": " It also opens up really cool workflows that would normally be very difficult if not impossible"}, {"start": 232.88, "end": 234.04, "text": " to perform."}, {"start": 234.04, "end": 239.35999999999999, "text": " For instance, here you see that we can freeze this lightning bolt in time, zoom around"}, {"start": 239.35999999999999, "end": 242.6, "text": " and marvel at the entire landscape."}, {"start": 242.6, "end": 244.4, "text": " Love it."}, {"start": 244.4, "end": 250.64000000000001, "text": " Of course, limitations still apply if we have really thin objects such as this flagpole,"}, {"start": 250.64000000000001, "end": 256.24, "text": " it might be missing entirely from the death map, or there are also cases where the image"}, {"start": 256.24, "end": 259.72, "text": " in painter cannot fill in useful information."}, {"start": 259.72, "end": 264.6, "text": " I cannot wait to see how this work evolves a couple of papers down 
the line."}, {"start": 264.6, "end": 269.08, "text": " One more interesting tidbit, if you have a look at the paper, make sure to open it in"}, {"start": 269.08, "end": 274.59999999999997, "text": " an Adobe reader, you will likely be very surprised to see that many of these things that you think"}, {"start": 274.59999999999997, "end": 278.64, "text": " are still images are actually animations."}, {"start": 278.64, "end": 283.44, "text": " Papers are not only getting more mind-blowing by the day, but also more informative and"}, {"start": 283.44, "end": 285.12, "text": " beautiful as well."}, {"start": 285.12, "end": 286.76, "text": " What a time to be alive."}, {"start": 286.76, "end": 289.76, "text": " This video has been supported by you on Patreon."}, {"start": 289.76, "end": 295.03999999999996, "text": " If you wish to support the series and also pick up cool perks in return like early access"}, {"start": 295.04, "end": 300.6, "text": " to these episodes, or getting your name immortalized in the video description, make sure to visit"}, {"start": 300.6, "end": 304.36, "text": " us through patreon.com slash 2 minute papers."}, {"start": 304.36, "end": 333.36, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Z6iTo7KY7lw
Can an AI Learn The Concept of Pose And Appearance? 👱‍♀️
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "HoloGAN: Unsupervised learning of 3D representations from natural images" is available here: https://www.monkeyoverflow.com/#/hologan-unsupervised-learning-of-3d-representations-from-natural-images/ ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Anastasia Marchenkova, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Matthias Jost,, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
This episode has been supported by Lambda. Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. I apologize for my voice today. I am trapped in this frail human body, and sometimes it falters. But, as you remember from the previous episode, the papers must go on. In the last few years, we have seen a bunch of new AI-based techniques that were specialized in generating new and novel images. This is mainly done through learning-based techniques, typically a generative adversarial network, GAN in short, which is an architecture where a generator neural network creates new images and passes them to a discriminator network, which learns to distinguish real photos from these fake, generated images. The two networks learn and improve together, so much so that many of these techniques have become so realistic that we sometimes can't even tell that they are synthetic images unless we look really closely. You see some examples here from BigGAN, a previous technique that is based on this architecture. Now, normally, if we are looking to generate a specific human face, we have to generate hundreds and hundreds of these images, and our best bet is to hope that sooner or later we'll find something that we were looking for. So, of course, scientists were interested in trying to exert control over the outputs, and with follow-up works, we can kind of control the appearance, but in return, we have to accept the pose in which they are given. And this new project is about teaching a learning algorithm to separate pose from identity. Now, that sounds kind of possible with proper supervision. What does this mean exactly? Well, we have to train these GANs on a large number of images so they can learn what the human face looks like, what landmarks to expect, and how to form them properly when generating new images. However, when the input images are given with different poses, we would normally need to add additional information to the discriminator that describes the rotations of these people and objects. Well, hold on to your papers, because that is exactly what is not happening in this new work. This paper proposes an architecture that contains a 3D transform and a projection unit. You see them here with red and blue, and this helps us in separating pose and identity; a rough, illustrative sketch of this rotate-then-project idea follows after this transcript. As a result, we have much finer artistic control over these during image generation. That is amazing. So, as you see here, it enables a really nice workflow where we can also set up the poses. Don't like the camera position for this generated bedroom? No problem. Need to rotate the chairs? No problem. And we are not even finished yet, because when we set up the pose correctly, we are not stuck with these images. We can also choose from several different appearances. And all this comes from the fact that this technique was able to learn the intricacies of these objects. Love it. Now, it is abundantly clear that as we rotate these cars or change the camera viewpoint for the bedroom, a flickering effect is still present. And this is how research works. We try to solve a new problem one step at a time, then we find flaws in the solution and improve upon that. As a result, as we always say, two more papers down the line and we will likely have smooth and creamy transitions between the images. The Lambda sponsorship spot is coming in a moment, and I don't know if you have noticed at the start, but they were also part of this research project as well. I think that it is as relevant a sponsor as it gets.
If you are a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you that they are offering GPU cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's Web-based IDE lets you easily access your instance right in your browser. And finally, hold on to your papers because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to LambdaLabs.com slash papers and sign up for their amazing GPU instances today. Thanks for watching and for your generous support and I'll see you next time.
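As promised above, here is a hedged, minimal sketch of the rotate-then-project idea behind the 3D transform and projection units. This is not the authors' code: the module names, volume size, and tiny decoder are illustrative assumptions, and the real model conditions the feature volume on a latent identity code and trains it adversarially, which is omitted here for brevity.

```python
import math
import torch
import torch.nn.functional as F

def yaw_matrix(yaw, batch=1):
    """3x4 affine matrix for a rigid rotation around the vertical axis."""
    c, s = math.cos(yaw), math.sin(yaw)
    rot = torch.tensor([[  c, 0.0,   s, 0.0],
                        [0.0, 1.0, 0.0, 0.0],
                        [ -s, 0.0,   c, 0.0]])
    return rot.unsqueeze(0).repeat(batch, 1, 1)

class RotateThenProject(torch.nn.Module):
    def __init__(self, feat=16, size=16):
        super().__init__()
        # A learned 3D feature volume; identity/appearance would enter here
        # through a latent code in the full model.
        self.volume = torch.nn.Parameter(torch.randn(1, feat, size, size, size))
        self.project = torch.nn.Conv2d(feat * size, 64, kernel_size=1)  # depth axis -> channels
        self.to_rgb = torch.nn.Conv2d(64, 3, kernel_size=3, padding=1)

    def forward(self, yaw):
        vol = self.volume
        grid = F.affine_grid(yaw_matrix(yaw), list(vol.shape), align_corners=False)
        rotated = F.grid_sample(vol, grid, align_corners=False)   # the 3D transform unit
        b, c, d, h, w = rotated.shape
        flat = rotated.reshape(b, c * d, h, w)                     # the projection unit
        return torch.tanh(self.to_rgb(self.project(flat)))

gen = RotateThenProject()
front = gen(0.0)                  # same learned content...
turned = gen(math.pi / 6)         # ...rendered under a different pose
print(front.shape, turned.shape)  # torch.Size([1, 3, 16, 16]) for both
```

The point of this design is that pose becomes an explicit input (the rotation applied to the volume), while everything else about the object stays in the learned representation, which is why the same "identity" can be rendered from many viewpoints without ever labeling the training images with rotations.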
[{"start": 0.0, "end": 2.58, "text": " This episode has been supported by Lambda."}, {"start": 2.58, "end": 6.62, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 6.62, "end": 8.66, "text": " I apologize for my voice today."}, {"start": 8.66, "end": 13.02, "text": " I am trapped in this frail human body, and sometimes it photos."}, {"start": 13.02, "end": 17.02, "text": " But you remember from the previous episode, the papers must go on."}, {"start": 17.02, "end": 21.1, "text": " In the last few years, we have seen a bunch of new AI-based techniques"}, {"start": 21.1, "end": 24.7, "text": " that were specialized in generating new and novel images."}, {"start": 24.7, "end": 27.3, "text": " This is mainly done through learning-based techniques,"}, {"start": 27.3, "end": 31.580000000000002, "text": " typically a generative adversarial network, again, in short,"}, {"start": 31.580000000000002, "end": 36.1, "text": " which is an architecture where a generator neural network creates new images"}, {"start": 36.1, "end": 38.74, "text": " and passes it to a discriminator network"}, {"start": 38.74, "end": 43.78, "text": " which learns to distinguish real photos from these fake generated images."}, {"start": 43.78, "end": 47.82, "text": " The two networks learn and improve together so much so"}, {"start": 47.82, "end": 50.980000000000004, "text": " that many of these techniques have become so realistic"}, {"start": 50.980000000000004, "end": 54.78, "text": " that with some times can't even tell they are synthetic images"}, {"start": 54.78, "end": 57.22, "text": " unless we look really closely."}, {"start": 57.22, "end": 59.5, "text": " You see some examples here from BigGan,"}, {"start": 59.5, "end": 62.58, "text": " a previous technique that is based on this architecture."}, {"start": 62.58, "end": 66.58, "text": " Now, normally, if we are looking to generate a specific human face,"}, {"start": 66.58, "end": 69.82, "text": " we have to generate hundreds and hundreds of these images,"}, {"start": 69.82, "end": 73.3, "text": " and our best bet is to hope that sooner or later"}, {"start": 73.3, "end": 75.86, "text": " we'll find something that we were looking for."}, {"start": 75.86, "end": 81.34, "text": " So, of course, scientists were interested in trying to exert control over the outputs"}, {"start": 81.34, "end": 85.46000000000001, "text": " and with follow-up works, we can kind of control the appearance,"}, {"start": 85.46, "end": 90.1, "text": " but in return, we have to accept the pose in which they are given."}, {"start": 90.1, "end": 93.74, "text": " And this new project is about teaching a learning algorithm"}, {"start": 93.74, "end": 96.89999999999999, "text": " to separate pose from identity."}, {"start": 96.89999999999999, "end": 101.33999999999999, "text": " Now, that sounds kind of possible with proper supervision."}, {"start": 101.33999999999999, "end": 103.13999999999999, "text": " What does this mean exactly?"}, {"start": 103.13999999999999, "end": 106.82, "text": " Well, we have to train these gans on a large number of images"}, {"start": 106.82, "end": 109.53999999999999, "text": " so they can learn what the human face looks like,"}, {"start": 109.53999999999999, "end": 113.13999999999999, "text": " what landmarks to expect, and how to form them properly"}, {"start": 113.13999999999999, "end": 115.1, "text": " when generating new images."}, {"start": 115.1, "end": 119.33999999999999, "text": " However, when the input images are given 
with different poses,"}, {"start": 119.33999999999999, "end": 123.74, "text": " we will normally need to add additional information to the discriminator"}, {"start": 123.74, "end": 127.74, "text": " that describes the rotations of these people and objects."}, {"start": 127.74, "end": 133.54, "text": " Well, hold on to your papers because that is exactly what is not happening in this new work."}, {"start": 133.54, "end": 137.9, "text": " This paper proposes an architecture that contains a 3D transform"}, {"start": 137.9, "end": 139.26, "text": " and a projection unit."}, {"start": 139.26, "end": 141.74, "text": " You see them here with red and blue,"}, {"start": 141.74, "end": 145.9, "text": " and this help us in separating pose and identity."}, {"start": 145.9, "end": 151.58, "text": " As a result, we have much finer artistic control over these during image generation."}, {"start": 151.58, "end": 153.5, "text": " That is amazing."}, {"start": 153.5, "end": 157.26000000000002, "text": " So as you see here, it enables a really nice workflow"}, {"start": 157.26000000000002, "end": 159.74, "text": " where we can also set up the poses."}, {"start": 159.74, "end": 162.98000000000002, "text": " Don't like the camera position for this generated bedroom?"}, {"start": 162.98000000000002, "end": 164.18, "text": " No problem."}, {"start": 164.18, "end": 165.82000000000002, "text": " Need to rotate the chairs?"}, {"start": 165.82000000000002, "end": 167.22, "text": " No problem."}, {"start": 167.22, "end": 171.58, "text": " And we are not even finished yet because when we set up the pose correctly,"}, {"start": 171.58, "end": 173.62, "text": " we are not stuck with these images."}, {"start": 173.62, "end": 177.10000000000002, "text": " We can also choose from several different appearances."}, {"start": 177.10000000000002, "end": 183.34, "text": " And all this comes from the fact that this technique was able to learn the intricacies of these objects."}, {"start": 183.34, "end": 184.42000000000002, "text": " Love it."}, {"start": 184.42000000000002, "end": 188.54000000000002, "text": " Now, it is abundantly clear that as we rotate these cars"}, {"start": 188.54000000000002, "end": 191.18, "text": " or change the camera view point for the bedroom,"}, {"start": 191.18, "end": 193.74, "text": " a flickering effect is still present."}, {"start": 193.74, "end": 195.9, "text": " And this is how research works."}, {"start": 195.9, "end": 199.26000000000002, "text": " We try to solve a new problem one step at a time."}, {"start": 199.26, "end": 203.17999999999998, "text": " Then we find flaws in the solution and improve upon that."}, {"start": 203.17999999999998, "end": 206.78, "text": " As a result, we always say two more papers down the line"}, {"start": 206.78, "end": 210.78, "text": " and will likely have smooth and creamy transitions between the images."}, {"start": 210.78, "end": 214.06, "text": " The Lambda Sponsorship Spot is coming in a moment."}, {"start": 214.06, "end": 216.54, "text": " And I don't know if you have noticed at the start,"}, {"start": 216.54, "end": 219.82, "text": " but they were also part of this research project as well."}, {"start": 219.82, "end": 223.89999999999998, "text": " I think that it is as relevant of a sponsor as it gets."}, {"start": 223.89999999999998, "end": 229.18, "text": " If you are a researcher or a startup looking for cheap GPU compute to run these algorithms,"}, {"start": 229.18, "end": 231.74, "text": " check out Lambda GPU Cloud."}, {"start": 231.74, "end": 
235.18, "text": " I've talked about Lambda's GPU workstations in other videos"}, {"start": 235.18, "end": 239.58, "text": " and I'm happy to tell you that they are offering GPU cloud services as well."}, {"start": 239.58, "end": 246.78, "text": " The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19."}, {"start": 246.78, "end": 252.14000000000001, "text": " Lambda's Web-based IDE lets you easily access your instance right in your browser."}, {"start": 252.14000000000001, "end": 256.22, "text": " And finally, hold on to your papers because the Lambda GPU Cloud"}, {"start": 256.22, "end": 260.14000000000004, "text": " costs less than half of AWS and Azure."}, {"start": 260.14000000000004, "end": 263.58000000000004, "text": " Make sure to go to LambdaLabs.com slash papers"}, {"start": 263.58000000000004, "end": 266.94000000000005, "text": " and sign up for their amazing GPU instances today."}, {"start": 266.94000000000005, "end": 269.18, "text": " Thanks for watching and for your generous support"}, {"start": 269.18, "end": 295.58, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=g1sAjtDoItE
Cubify All The Things! 🐄
❤️ Check out Linode here and get $20 free credit on your account: https://www.linode.com/papers 📝 The paper "Cubic Stylization" is available here: http://www.dgp.toronto.edu/projects/cubic-stylization/ Erratum: I have misunderstood the "fixing" part. Instead of fixing as in "repairing", it rather fixes regions as in "pinning down" parts of it. (Thank you Liam Appelbe for noting it!) 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Anastasia Marchenkova, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Matthias Jost,, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. I apologize for my voice today, I am trapped in this frail human body and sometimes it falters. But the papers must go on. This is one of those papers where I find that the more time I spend with it, the more I realize how amazing it is. It starts out with an interesting little value proposition that, in and of itself, would likely not make it to a paper. So, what is this paper about? Well, as you see here, this one is about the cubification of 3D geometry. In other words, we take an input shape and it stylizes it to look more like a cube. Okay, that's cute, especially given that there are many, many ways to do this and it's hard to immediately put into words what a good end result would be. You can see a comparison to previous works here. These previous works did not seem to preserve a lot of fine details, but if you look at this new one, you see that this one does that really well. Very nice indeed, but still, when I read this paper, at this point I was thinking I'd like a little more. Well, I quickly found out that this work has more up its sleeve. So much more. Let's talk about seven of these amazing features. For instance, one: we can control the strength of the transformation with this lambda parameter. As you see, the more we increase it, the more heavy-handed the smushing process is going to get. Please remember this part. Two: we can also cubify selectively along different directions, or select parts of the object that should be cubified differently. Hmm. Okay. Three. Four: this transformation procedure also takes the orientation into consideration. This means that we can perform it from different angles, which gives us a large selection of possible outputs for the same model. Five: it is fast and works on high-resolution geometry, and you see different settings for the lambda parameter here, which is the same parameter as we talked about before, the strength of the transformation. Six: we can also combine many of these features interactively until a desirable shape is found. Seven is about to come in a moment, but to appreciate what that is, we have to look at this. To perform what you have seen here so far, we have to minimize this expression; a paraphrased version of it appears right after this transcript. The first term is ARAP, as-rigid-as-possible, which stipulates that whatever we do in terms of smushing, it should preserve the fine local features. The second part is called the regularization term, which encourages sparser, more axis-aligned solutions, so we don't destroy the entire model during this process. The stronger this term is, the bigger say it has in the final results, which in turn become more cube-like. So, how do we do that? Well, of course, with our trusty little lambda parameter. Not only that, but if we look at the appendix, it tells us that we can generalize the second regularization term for many different shapes. So here we are, finally, seven: it doesn't even need to be cubification, we can specify all kinds of polyhedra. Look at those gorgeous results. I love this paper. It is playful, it is elegant, it has utility, and it generalizes well. It doesn't care in the slightest what the current mainstream ideas are and invites us into its own little world. In summary, this will serve all your cubification needs, and it turns out it might even fix your geometry and more. I would love to see more papers like this. In this series, I try to make people feel how I feel when I read these papers. I hope I have managed this time, but you be the judge. Let me know in the comments.
This episode has been supported by Linode. Linode is the world's largest independent cloud computing provider. They offer you virtual servers that make it easy and affordable to host your own app, site, project, or anything else in the cloud. Whether you are a Linux expert or just starting to tinker with your own code, Linode will be useful for you. A few episodes ago, we played with an implementation of OpenAI's GPT-2, where our excited viewers accidentally overloaded the system. With Linode's load balancing technology and instances ranging from shared Nanodes all the way up to dedicated GPUs, you don't have to worry about your project being overloaded. To get $20 of free credit, make sure to head over to linode.com slash papers and sign up today using the promo code Papers20. Our thanks to Linode for supporting the series and helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
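For reference, the expression mentioned in the transcript above is, roughly, the following energy. This is a paraphrase of the cubic stylization formulation with the notation lightly simplified, so treat it as a sketch rather than a verbatim quote of the paper. Here $V$ and $\tilde{V}$ are the original and deformed vertex positions, $R_i$ is a per-vertex rotation, $w_{ij}$ are cotangent weights over the neighborhood $\mathcal{N}(i)$, $\hat{n}_i$ is the unit normal at vertex $i$, and $a_i$ is an area weight:

$$
\min_{\tilde{V},\,\{R_i\}} \;\sum_i \Biggl( \sum_{j \in \mathcal{N}(i)} \frac{w_{ij}}{2} \Bigl\lVert R_i \,(v_j - v_i) - (\tilde{v}_j - \tilde{v}_i) \Bigr\rVert_2^2 \;+\; \lambda\, a_i \,\bigl\lVert R_i\, \hat{n}_i \bigr\rVert_1 \Biggr)
$$

The first, as-rigid-as-possible term keeps local details intact; the second, $\ell_1$ term is smallest when the rotated normals align with the coordinate axes, which is what pushes the shape toward a cube, and $\lambda$ trades one against the other, exactly the knob demonstrated in the video. The appendix's generalization to other polyhedra amounts to replacing this $\ell_1$ term with a norm tailored to a different set of face normals.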
[{"start": 0.0, "end": 4.72, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zorna Ifehir."}, {"start": 4.72, "end": 10.36, "text": " I apologize for my voice today, I am trapped in this frail human body and sometimes it"}, {"start": 10.36, "end": 11.36, "text": " filters."}, {"start": 11.36, "end": 13.6, "text": " But the papers must go on."}, {"start": 13.6, "end": 18.240000000000002, "text": " This is one of those papers where I find that the more time I spend with it, the more"}, {"start": 18.240000000000002, "end": 21.0, "text": " I realize how amazing it is."}, {"start": 21.0, "end": 25.72, "text": " It starts out with an interesting little value proposition that in and of itself would"}, {"start": 25.72, "end": 28.080000000000002, "text": " likely not make it to a paper."}, {"start": 28.08, "end": 30.599999999999998, "text": " So, what is this paper about?"}, {"start": 30.599999999999998, "end": 36.0, "text": " Well, as you see here, this one is about cubification of 3D geometry."}, {"start": 36.0, "end": 41.839999999999996, "text": " In other words, we take an input shape and it stylizes it to look more like a cube."}, {"start": 41.839999999999996, "end": 46.84, "text": " Okay, that's cute, especially given that there are many, many ways to do this and it's"}, {"start": 46.84, "end": 52.44, "text": " hard to immediately put into words what a good end result would be, you can see a comparison"}, {"start": 52.44, "end": 54.44, "text": " to previous works here."}, {"start": 54.44, "end": 58.68, "text": " These previous works did not seem to preserve a lot of fine details, but if you look at"}, {"start": 58.68, "end": 62.599999999999994, "text": " this new one, you see that this one does that really well."}, {"start": 62.599999999999994, "end": 68.4, "text": " Very nice indeed, but still, when I read this paper at this point I was thinking I'd like"}, {"start": 68.4, "end": 69.4, "text": " a little more."}, {"start": 69.4, "end": 74.6, "text": " Well, I quickly found out that this work has more up its sleeve."}, {"start": 74.6, "end": 75.6, "text": " So much more."}, {"start": 75.6, "end": 79.32, "text": " Let's talk about 7 of these amazing features."}, {"start": 79.32, "end": 84.0, "text": " For instance, one, we can control the strength of the transformation with this lambda"}, {"start": 84.0, "end": 85.0, "text": " parameter."}, {"start": 85.0, "end": 89.88, "text": " As you see, the more we increase it, the more heavy-handed the smushing process is going"}, {"start": 89.88, "end": 90.88, "text": " to get."}, {"start": 90.88, "end": 93.4, "text": " Please remember this part."}, {"start": 93.4, "end": 94.4, "text": " 2."}, {"start": 94.4, "end": 103.96000000000001, "text": " We can also cubify selectively along different directions or select parts of the object that"}, {"start": 103.96000000000001, "end": 106.72, "text": " should be cubified differently."}, {"start": 106.72, "end": 107.72, "text": " Hmm."}, {"start": 107.72, "end": 108.72, "text": " Okay."}, {"start": 108.72, "end": 115.72, "text": " 3."}, {"start": 115.72, "end": 116.72, "text": " 4."}, {"start": 116.72, "end": 121.6, "text": " This transformation procedure also takes into consideration the orientations."}, {"start": 121.6, "end": 126.03999999999999, "text": " This means that we can perform it from different angles, which gives us a large selection of"}, {"start": 126.03999999999999, "end": 129.0, "text": " possible outputs for the same model."}, {"start": 129.0, "end": 130.0, "text": " 5."}, 
{"start": 130.0, "end": 135.44, "text": " It is fast and works on high-resolution geometry, and you see different settings for the lambda"}, {"start": 135.44, "end": 140.4, "text": " parameter here that is the same parameter as we talked about before, the strength of"}, {"start": 140.4, "end": 142.24, "text": " the transformation."}, {"start": 142.24, "end": 143.76, "text": " 6."}, {"start": 143.76, "end": 150.44, "text": " We can also combine many of these features interactively until a desirable shape is found."}, {"start": 150.44, "end": 156.72, "text": " 7 is about to come in a moment, but to appreciate what that is, we have to look at this."}, {"start": 156.72, "end": 161.6, "text": " To perform what you have seen here so far, we have to minimize this expression."}, {"start": 161.6, "end": 168.28, "text": " This first term says ARAP as rigid as possible, which stipulates that whatever we do in terms"}, {"start": 168.28, "end": 172.44, "text": " of smushing, it should preserve the fine local features."}, {"start": 172.44, "end": 178.4, "text": " The second part is called the regularization term that encourages sparser, more access-aligned"}, {"start": 178.4, "end": 183.04, "text": " solutions, so we don't destroy the entire model during this process."}, {"start": 183.04, "end": 188.56, "text": " The stronger this term is, the bigger say it has in the final results, which in return"}, {"start": 188.56, "end": 190.44, "text": " become more cubelike."}, {"start": 190.44, "end": 192.76, "text": " So, how do we do that?"}, {"start": 192.76, "end": 196.92, "text": " Well, of course, with our trusty little lambda parameter."}, {"start": 196.92, "end": 201.68, "text": " Not only that, but if we look at the appendix, it tells us that we can generalize the second"}, {"start": 201.68, "end": 205.16, "text": " regularization term for many different shapes."}, {"start": 205.16, "end": 208.16, "text": " So here we are, finally, 7."}, {"start": 208.16, "end": 213.68, "text": " It doesn't even need to be cubification, we can specify all kinds of polyhedra."}, {"start": 213.68, "end": 215.76, "text": " Look at those gorgeous results."}, {"start": 215.76, "end": 217.72, "text": " I love this paper."}, {"start": 217.72, "end": 218.96, "text": " It is playful."}, {"start": 218.96, "end": 223.84, "text": " It is elegant, it has utility, and it generalizes well."}, {"start": 223.84, "end": 228.84, "text": " It doesn't care in the slightest what the current mainstream ideas are and invites us into"}, {"start": 228.84, "end": 230.60000000000002, "text": " its own little world."}, {"start": 230.60000000000002, "end": 236.36, "text": " In summary, this will serve all your cubification needs and turns out it might even fix your"}, {"start": 236.36, "end": 238.52, "text": " geometry and more."}, {"start": 238.52, "end": 241.16, "text": " I would love to see more papers like this."}, {"start": 241.16, "end": 246.24, "text": " In this series, I try to make people feel how I feel when I read these papers."}, {"start": 246.24, "end": 250.04000000000002, "text": " I hope I have managed this time, but you be the judge."}, {"start": 250.04000000000002, "end": 251.48000000000002, "text": " Let me know in the comments."}, {"start": 251.48000000000002, "end": 253.72, "text": " This episode has been supported by Linode."}, {"start": 253.72, "end": 257.88, "text": " Linode is the world's largest independent cloud computing provider."}, {"start": 257.88, "end": 264.2, "text": " They offer you virtual servers that make it easy 
and affordable to host your own app, site,"}, {"start": 264.2, "end": 266.96000000000004, "text": " project, or anything else in the cloud."}, {"start": 266.96000000000004, "end": 271.32, "text": " Whether you are a Linodex expert or just starting to tinker with your own code, Linode"}, {"start": 271.32, "end": 272.72, "text": " will be useful for you."}, {"start": 272.72, "end": 280.16, "text": " A few episodes ago, we played with an implementation of OpenAIS GPT2 where our excited viewers accidentally"}, {"start": 280.16, "end": 281.88000000000005, "text": " overloaded the system."}, {"start": 281.88000000000005, "end": 287.28000000000003, "text": " With Linode's load balancing technology and instances ranging from shared nanodes,"}, {"start": 287.28000000000003, "end": 292.40000000000003, "text": " all the way up to dedicated GPUs, you don't have to worry about your project being overloaded."}, {"start": 292.40000000000003, "end": 298.48, "text": " To get $20 of free credit, make sure to head over to linode.com slash papers and sign up"}, {"start": 298.48, "end": 301.96000000000004, "text": " today using the promo code Papers20."}, {"start": 301.96, "end": 306.64, "text": " Our thanks to Linode for supporting the series and helping us make better videos for you."}, {"start": 306.64, "end": 334.64, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=atcKO15YVD8
Ubisoft's AI Learns To Compute Game Physics In Microseconds! ⚛️
❤️ Check out Weights & Biases here and sign up for a free demo: https://www.wandb.com/papers Their blog post and their CodeSearchNet system are available here: https://www.wandb.com/articles/codesearchnet https://app.wandb.ai/github/CodeSearchNet/benchmark 📝 The paper "Subspace Neural Physics: Fast Data-Driven Interactive Simulation" is available here: https://montreal.ubisoft.com/fr/deep-cloth-paper/ http://theorangeduck.com/page/subspace-neural-physics-fast-data-driven-interactive-simulation 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Anastasia Marchenkova, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Matthias Jost,, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #gamedev
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In almost any kind of real-time computer game where different objects interact with each other, having some sort of physics engine is a requirement. Flags waving in the wind and stroking bunnies with circular objects are among these cases, and of course, not all heroes wear capes, but the ones that do require the presence of such a physics engine. However, a full physical simulation of these interactions is often not possible, because it is orders of magnitude slower than what we are looking for in real-time applications. Now, hold on to your papers, because this project proposes a new learning-based method that can speed up the standard physical simulations and make them 300 to 5000 times faster. Then, we can give it all the positions, forces, and other information, and it will be able to tell us the outcome, and do all this faster than real time. Since this is a neural network-based project, our seasoned Fellow Scholars know that we will need many hours of simulation data to train on. Fortunately, this information can be produced with one of those more accurate but slower methods. We can wait arbitrarily long for a full physical simulation for this training set, because it is only needed once, for the training. One of the key decisions in this project is that it also supports interaction with objects, and we can even specify external forces like wind direction and speed controls. In some papers, the results are difficult to evaluate. For instance, when we produce any kind of deepfake, we need to call in people and create a user study where we measure how often people believe forged videos to be real. The process has many pitfalls, like choosing a good distribution of people, asking the right questions, and so on. Another great part of the design of this project is that evaluating it is a breeze. We can just give it a novel situation, let it guess the result, then simulate the same thing with a full physical simulator and compare the two against each other. And they are really close. But wait, do you see what I see? If you are worried about how computationally intensive the neural network-based solution is, don't be. It only takes a few megabytes of memory, which is nothing, and it runs in the order of microseconds, which is also nothing. So much so that if you look here, you see that the full simulation can be done at two frames per second, while this new solution produces thousands and thousands of frames per second. I think it is justified to say that this thing costs absolutely nothing. I think I will take this one. Thank you very much. We can even scale up the number of interactions, as you see here, and even in this case, it can produce more than a hundred frames per second. It is incredible. We can also up- or downscale the quality of the results and get different trade-offs. If a coarser simulation looks good enough for our application, we can even get up to tens of thousands of frames per second. That costs nothing, even compared to the previous nothing. The key part of the solution is that it compresses the simulated data through a method called principal component analysis, and the training takes place on this compressed representation, which only needs to be unpacked when something new is happening in the game, which leads to a significant speed-up and is also very gentle with memory use. A tiny, illustrative sketch of this compress-then-learn idea follows after this transcript. And working with this compressed representation is the reason why you see this method referred to as subspace neural physics.
However, as always, some limitations apply. For instance, it can kind of extrapolate beyond the examples that it has been trained on, but as you see here, if the training data is lacking a given kind of application, don't expect miracles. Yet. If you have a look at the paper, you'll actually find a user study, but it is about the usefulness of the individual components of the system. Make sure to check it out in the video description. This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects, and it is being used by OpenAI, Toyota Research, Stanford, and Berkeley. Have a look at this project they launched to make computer code semantically searchable where, for example, we could ask, show me the best model on this dataset with the fewest parameters, and get a piece of code that does exactly that. Absolutely amazing. Make sure to visit them through wandb.com slash papers, or just click the link in the video description, and you can get a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
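As promised above, here is a hedged, minimal sketch of the compress-then-learn-the-dynamics idea. It is not Ubisoft's implementation: the fake dataset, the subspace size, and the linear next-state predictor (which stands in for the paper's small neural network) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend offline data: 500 frames of a cloth with 1000 vertices (3000 DOFs),
# produced once by a slow but accurate full physical simulator.
frames = rng.standard_normal((500, 3000))

# Offline step 1: principal component analysis via the SVD.
mean = frames.mean(axis=0)
_, _, Vt = np.linalg.svd(frames - mean, full_matrices=False)
k = 16                                   # size of the subspace
basis = Vt[:k]                           # (k, 3000) principal components
z = (frames - mean) @ basis.T            # (500, k) compressed trajectory

# Offline step 2: fit a next-state predictor entirely in the subspace.
# The paper trains a small neural network; ridge regression stands in here.
X, Y = z[:-1], z[1:]
W = np.linalg.solve(X.T @ X + 1e-3 * np.eye(k), X.T @ Y)   # (k, k)

# Runtime: step the compressed state, and only unpack to the full mesh
# when the game actually needs the vertex positions.
state = z[0]
for _ in range(5):
    state = state @ W
full_state = mean + state @ basis        # back to all 3000 DOFs
print(full_state.shape)                  # (3000,)
```

Stepping a 16-dimensional state is what makes microsecond-scale updates and the tiny memory footprint plausible; the expensive part, the full simulator, is only ever run offline to generate the training set.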
[{"start": 0.0, "end": 5.54, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fahir in almost any kind"}, {"start": 5.54, "end": 9.82, "text": " of real-time computer games where different objects interact with each other, having"}, {"start": 9.82, "end": 13.14, "text": " some sort of physics engine is a requirement."}, {"start": 13.14, "end": 18.94, "text": " Flag's waving in the wind, stroking bunnies with circular objects are among these cases,"}, {"start": 18.94, "end": 24.54, "text": " and of course not all heroes wear capes, but the ones that do require the presence of such"}, {"start": 24.54, "end": 25.78, "text": " a physics engine."}, {"start": 25.78, "end": 31.380000000000003, "text": " However, a full physical simulation of these interactions is often not possible because"}, {"start": 31.380000000000003, "end": 37.1, "text": " it is orders of magnitude slower than what we are looking for in real-time applications."}, {"start": 37.1, "end": 42.86, "text": " Now, hold on to your papers because this project proposes a new learning-based method that"}, {"start": 42.86, "end": 50.38, "text": " can speed up the standard physical simulations and make them 300 to 5000 times faster."}, {"start": 50.38, "end": 55.74, "text": " And then we can give it all the positions, forces and other information and it will be"}, {"start": 55.74, "end": 61.620000000000005, "text": " able to tell us the outcome and do all this faster than real-time."}, {"start": 61.620000000000005, "end": 66.06, "text": " Since this is a neural network-based project, our seasoned Fellow Scholars know that will"}, {"start": 66.06, "end": 69.62, "text": " need many hours of simulation data to train on."}, {"start": 69.62, "end": 75.78, "text": " Fortunately, this information can be produced with one of those more accurate but slower methods."}, {"start": 75.78, "end": 80.62, "text": " We can wait orbit rarely long for a full physical simulation for this training set because"}, {"start": 80.62, "end": 83.34, "text": " it is only needed once for the training."}, {"start": 83.34, "end": 88.5, "text": " One of the key decisions in this project is that it also supports interaction with objects"}, {"start": 88.5, "end": 94.58, "text": " and we can even specify external forces like wind direction and speed controls."}, {"start": 94.58, "end": 97.98, "text": " In some papers, the results are difficult to evaluate."}, {"start": 97.98, "end": 102.82, "text": " For instance, when we produce any kind of deepfake, we need to call in people and create"}, {"start": 102.82, "end": 108.89999999999999, "text": " a user study where we measure how often do people believe forged videos to be real."}, {"start": 108.89999999999999, "end": 113.86, "text": " The process has many pitfalls like choosing a good distribution of people, asking the right"}, {"start": 113.86, "end": 116.13999999999999, "text": " questions and so on."}, {"start": 116.13999999999999, "end": 121.22, "text": " Another great part of the design of this project is that evaluating this is a breeze."}, {"start": 121.22, "end": 127.1, "text": " We can just give it a novel situation, let it guess the result, then simulate the same"}, {"start": 127.1, "end": 133.14, "text": " thing with a full physical simulator and compare the two against each other."}, {"start": 133.14, "end": 135.54, "text": " And they are really close."}, {"start": 135.54, "end": 138.34, "text": " But wait, do you see what I see?"}, {"start": 138.34, "end": 142.62, "text": " If you are 
worried about how computationally intensive the neural network-based solution"}, {"start": 142.62, "end": 144.45999999999998, "text": " is, don't be."}, {"start": 144.45999999999998, "end": 151.26, "text": " It only takes a few megabytes of memory, which is nothing and it runs in the order of microseconds,"}, {"start": 151.26, "end": 152.9, "text": " which is also nothing."}, {"start": 152.9, "end": 158.62, "text": " So much so that if you look here, you see that the full simulation can be done at two frames"}, {"start": 158.62, "end": 164.58, "text": " per second while this new solution produces thousands and thousands of frames per second."}, {"start": 164.58, "end": 169.62, "text": " I think it is justified to say that this thing costs absolutely nothing."}, {"start": 169.62, "end": 171.58, "text": " I think I will take this one."}, {"start": 171.58, "end": 172.74, "text": " Thank you very much."}, {"start": 172.74, "end": 178.3, "text": " We can even scale up the number of interactions as you see here and even in this case, it"}, {"start": 178.3, "end": 181.9, "text": " can produce more than a hundred frames per second."}, {"start": 181.9, "end": 183.14000000000001, "text": " It is incredible."}, {"start": 183.14000000000001, "end": 188.70000000000002, "text": " We can also up or downscale the quality of the results and get different trade-offs."}, {"start": 188.70000000000002, "end": 193.5, "text": " If a core simulation looks good enough for our applications, we can even get up to tens"}, {"start": 193.5, "end": 196.14000000000001, "text": " of thousands of frames per second."}, {"start": 196.14000000000001, "end": 199.98000000000002, "text": " That costs nothing even compared to the previous nothing."}, {"start": 199.98000000000002, "end": 204.94, "text": " The key part of the solution is that it compresses the simulated data through a method called"}, {"start": 204.94, "end": 210.26, "text": " principle component analysis and the training takes place on this compressed representation"}, {"start": 210.26, "end": 215.5, "text": " which only needs to be unpacked when something new is happening in the game which leads to"}, {"start": 215.5, "end": 220.45999999999998, "text": " a significant speed up and it is also very gentle with memory use."}, {"start": 220.45999999999998, "end": 225.42, "text": " And working with this compressed representation is the reason why you see this method referred"}, {"start": 225.42, "end": 227.78, "text": " to as subspace neurophysics."}, {"start": 227.78, "end": 231.42, "text": " However, as always, some limitations apply."}, {"start": 231.42, "end": 236.22, "text": " For instance, it can kind of extrapolate beyond the examples that it has been trained"}, {"start": 236.22, "end": 242.42, "text": " on but as you see here, if the training data is lacking a given kind of application, don't"}, {"start": 242.42, "end": 243.98, "text": " expect miracles."}, {"start": 243.98, "end": 245.26, "text": " Yet."}, {"start": 245.26, "end": 249.38, "text": " If you have a look at the paper, you'll actually find the user study but it is about"}, {"start": 249.38, "end": 252.94, "text": " the usefulness of the individual components of the system."}, {"start": 252.94, "end": 255.62, "text": " Make sure to check it out in the video description."}, {"start": 255.62, "end": 259.02, "text": " This episode has been supported by weights and biases."}, {"start": 259.02, "end": 263.82, "text": " Wates and biases provides tools to track your experiments in your deep learning 
projects."}, {"start": 263.82, "end": 268.94, "text": " It can save you a ton of time and money in these projects and is being used by OpenAI,"}, {"start": 268.94, "end": 272.14, "text": " Toyota Research, Stanford and Berkeley."}, {"start": 272.14, "end": 276.7, "text": " Have a look at this project they launched to make computer code semantically searchable"}, {"start": 276.7, "end": 282.42, "text": " where, for example, we could ask, show me the best model on this dataset with the fewest"}, {"start": 282.42, "end": 286.98, "text": " parameters and get a piece of code that does exactly that."}, {"start": 286.98, "end": 288.5, "text": " Absolutely amazing."}, {"start": 288.5, "end": 297.18, "text": " Make sure to visit them through www.wndb.com slash papers or just click the link in the video"}, {"start": 297.18, "end": 300.82, "text": " description and you can get a free demo today."}, {"start": 300.82, "end": 304.7, "text": " Our thanks to weights and biases for helping us make better videos for you."}, {"start": 304.7, "end": 327.62, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=nSHU-4Yt4eQ
AIs Are Getting Too Smart - Time For A New "IQ Test” 🎓
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems" is available here: https://super.gluebenchmark.com https://arxiv.org/abs/1905.00537 Our earlier video, "DeepMind's AI Takes An IQ Test": https://www.youtube.com/watch?v=eSaShQbUJTQ Our earlier video on the OpenAI Retro Contest is available here: https://www.youtube.com/watch?v=2FHHuRTkr_Y 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Anastasia Marchenkova, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Matthias Jost,, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
This episode has been supported by Lambda. Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In a world where learning-based algorithms are rapidly becoming more capable, I increasingly find myself asking the question, so how smart are these algorithms, really? I am clearly not alone with this. To be able to answer this question, a set of tests were proposed, and many of these tests shared one important design decision: they are very difficult to solve for someone without generalized knowledge. In an earlier episode, we talked about DeepMind's paper where they created a bunch of randomized, mind-bending, or in the case of an AI, maybe silicon-bending questions that looked quite a bit like a nasty, nasty IQ test. And even in the presence of additional distractions, their AI did extremely well. I noted that on this test, finding the correct solution around 60% of the time would be quite respectable for a human, whereas their algorithm succeeded over 62% of the time, and upon removing the annoying distractions, this success rate skyrocketed to 78%. Wow! More specialized tests have also been developed. For instance, scientists at DeepMind also released a modular math test with over 2 million questions, in which their AI did extremely well at tasks like interpolation and rounding decimals and integers, whereas it was not too accurate at detecting primality and at factorization. Furthermore, a little more than a year ago, the GLUE benchmark appeared, which was designed to test the natural language understanding capabilities of these AIs. When benchmarking the state-of-the-art learning algorithms, they found that they were approximately 80% as good as the fellow non-expert human beings. That is remarkable. Given the difficulty of the test, they were likely not expecting human-level performance, which you see marked with the black horizontal line, and which was surpassed within less than a year. So what do we do in this case? Well, as always, of course, design an even harder test. In comes SuperGLUE, the paper we are looking at today, which is meant to provide an even harder challenge for these learning algorithms. Have a look at these example questions here. For instance, this time around, reusing general background knowledge gets more emphasis in the questions. As a result, the AI has to be able to learn and reason with more finesse to successfully address these questions. Here you see a bunch of examples, and you can see that these are anything but trivial little tests for a baby AI. Not all, but some of these are calibrated for humans at around college-level education. So, let's have a look at how the current state-of-the-art AIs fared in this one. Well, not as well as humans, which is good news, because that was the main objective. However, they still did remarkably well. For instance, the BoolQ package contains a set of yes-and-no questions, and on these, the AIs are reasonably close to human performance, but on MultiRC, the multi-sentence reading comprehension package, they still do okay, but humans outperform them by quite a bit. If you'd like to poke at these tasks yourself, a small example of loading them follows after this transcript. Note that you see two numbers for this test. The reason for this is that there are multiple test sets for this package. Note that in the second one, even humans seem to fail almost half the time, so I can only imagine the revelations we will have a couple more papers down the line. I am very excited to see that, and if you are too, make sure to subscribe and hit the bell icon to never miss future episodes.
If you are a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos, and I'm happy to tell you that they are offering GPU cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold on to your papers, because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to LambdaLabs.com slash papers and sign up for one of their amazing GPU instances today. Thanks for watching, and for your generous support, and I'll see you next time.
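As mentioned above, here is a small, hedged example of looking at two of the SuperGLUE tasks from the transcript, BoolQ and MultiRC. It assumes the Hugging Face `datasets` package is installed and that the benchmark is published there under the "super_glue" name with these config names, which is how it was distributed at the time of writing; the baseline at the end is just a sanity check, not a number from the paper.

```python
from datasets import load_dataset

# Two of the SuperGLUE packages mentioned in the video.
boolq = load_dataset("super_glue", "boolq", split="validation")
multirc = load_dataset("super_glue", "multirc", split="validation")

# One yes/no question from BoolQ.
sample = boolq[0]
print(sample["passage"][:200])
print(sample["question"], "->", "yes" if sample["label"] == 1 else "no")

# A trivial majority-class baseline for BoolQ, to get a feel for how far
# that is from the model and human numbers discussed above.
labels = [example["label"] for example in boolq]
majority = max(set(labels), key=labels.count)
accuracy = sum(label == majority for label in labels) / len(labels)
print(f"BoolQ size: {len(boolq)}, MultiRC size: {len(multirc)}")
print(f"majority-class accuracy on BoolQ: {accuracy:.3f}")
```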
[{"start": 0.0, "end": 3.04, "text": " This episode has been supported by Lambda."}, {"start": 3.04, "end": 7.16, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifaher."}, {"start": 7.16, "end": 12.24, "text": " In a world where learning based algorithms are rapidly becoming more capable, I increasingly"}, {"start": 12.24, "end": 17.8, "text": " find myself asking the question, so how smart are these algorithms really?"}, {"start": 17.8, "end": 20.14, "text": " I am clearly not alone with this."}, {"start": 20.14, "end": 24.96, "text": " To be able to answer this question, a set of tests were proposed and many of these tests"}, {"start": 24.96, "end": 27.76, "text": " shared one important design decision."}, {"start": 27.76, "end": 31.880000000000003, "text": " They are very difficult to solve for someone without generalized knowledge."}, {"start": 31.880000000000003, "end": 37.400000000000006, "text": " In an earlier episode, we talked about DeepMind's paper where they created a bunch of randomized,"}, {"start": 37.400000000000006, "end": 43.28, "text": " mind-bending, or in the case of an AI, maybe silicon-bending questions that looked quite a bit"}, {"start": 43.28, "end": 46.52, "text": " like a nasty, nasty IQ test."}, {"start": 46.52, "end": 51.56, "text": " And even in the presence of additional distractions, their AI did extremely well."}, {"start": 51.56, "end": 57.24, "text": " I noted that on this test, finding the correct solution around 60% of the time would be quite"}, {"start": 57.24, "end": 64.52, "text": " respectable for a human where their algorithms succeeded over 62% of the time and upon removing"}, {"start": 64.52, "end": 70.04, "text": " the annoying distractions, this success rate skyrocketed to 78%."}, {"start": 70.04, "end": 71.72, "text": " Wow!"}, {"start": 71.72, "end": 74.4, "text": " More specialized tests have also been developed."}, {"start": 74.4, "end": 80.56, "text": " For instance, scientists at DeepMind also released a modular math test with over 2 million"}, {"start": 80.56, "end": 88.08, "text": " questions in which their AI did extremely well at tasks like interpolation, rounding decimals,"}, {"start": 88.08, "end": 93.96000000000001, "text": " integers, whereas they were not too accurate at detecting primality and factorization."}, {"start": 93.96000000000001, "end": 99.28, "text": " Furthermore, a little more than a year ago, the glue benchmark appeared that was designed"}, {"start": 99.28, "end": 104.24000000000001, "text": " to test the natural language understanding capabilities of these AI's."}, {"start": 104.24000000000001, "end": 109.68, "text": " When benchmarking the state of the art learning algorithms, they found that they were approximately"}, {"start": 109.68, "end": 114.48, "text": " 80% as good as the fellow non-expert human beings."}, {"start": 114.48, "end": 116.2, "text": " That is remarkable."}, {"start": 116.2, "end": 120.84, "text": " Given the difficulty of the test, they were likely not expecting human level performance,"}, {"start": 120.84, "end": 125.80000000000001, "text": " which you see marked with the black horizontal line which was surpassed within less than a"}, {"start": 125.80000000000001, "end": 126.80000000000001, "text": " year."}, {"start": 126.80000000000001, "end": 129.24, "text": " So what do we do in this case?"}, {"start": 129.24, "end": 134.4, "text": " Well, as always, of course, design an even harder test."}, {"start": 134.4, "end": 135.96, "text": " In comes super glue."}, 
{"start": 135.96, "end": 140.68, "text": " The paper we are looking at today, which meant to provide an even harder challenge for these"}, {"start": 140.68, "end": 142.28, "text": " learning algorithms."}, {"start": 142.28, "end": 144.96, "text": " Have a look at these example questions here."}, {"start": 144.96, "end": 150.16, "text": " For instance, this time around, reusing general background knowledge gets more emphasis in"}, {"start": 150.16, "end": 151.24, "text": " the questions."}, {"start": 151.24, "end": 157.28, "text": " As a result, the AI has to be able to learn and reason with more finesse to successfully"}, {"start": 157.28, "end": 160.96, "text": " address these questions."}, {"start": 160.96, "end": 165.64000000000001, "text": " Here you see a bunch of examples, and you can see that these are anything but trivial"}, {"start": 165.64, "end": 168.51999999999998, "text": " little tests for a baby AI."}, {"start": 168.51999999999998, "end": 173.76, "text": " Not all, but some of these are calibrated for humans at around college level education."}, {"start": 173.76, "end": 179.39999999999998, "text": " So, let's have a look at how the current state of the art AI is fared in this one."}, {"start": 179.39999999999998, "end": 184.95999999999998, "text": " Well, not as good as humans, which is good news because that was the main objective."}, {"start": 184.95999999999998, "end": 188.2, "text": " However, they still did remarkably well."}, {"start": 188.2, "end": 192.95999999999998, "text": " For instance, the bull-cue package contains a set of yes and no questions."}, {"start": 192.96, "end": 199.28, "text": " And these, the AI's are reasonably close to human performance, but on multi-RC, the"}, {"start": 199.28, "end": 205.20000000000002, "text": " multi-sentence reading comprehension package, they still do okay, but humans outperform them"}, {"start": 205.20000000000002, "end": 206.92000000000002, "text": " by quite a bit."}, {"start": 206.92000000000002, "end": 209.28, "text": " Note that you see two numbers for this test."}, {"start": 209.28, "end": 213.8, "text": " The reason for this is that there are multiple test sets for this package."}, {"start": 213.8, "end": 219.20000000000002, "text": " Note that in the second one, even humans seem to fail almost half the time, so I can only"}, {"start": 219.2, "end": 223.11999999999998, "text": " imagine the revelation will have a couple more papers down the line."}, {"start": 223.11999999999998, "end": 227.92, "text": " I am very excited to see that, and if you are too, make sure to subscribe and hit the"}, {"start": 227.92, "end": 230.72, "text": " bell icon to never miss future episodes."}, {"start": 230.72, "end": 236.04, "text": " If you are a researcher or a startup looking for cheap GPU compute to run these algorithms,"}, {"start": 236.04, "end": 238.72, "text": " check out Lambda GPU Cloud."}, {"start": 238.72, "end": 243.56, "text": " I've talked about Lambda's GPU workstations in other videos, and I'm happy to tell you"}, {"start": 243.56, "end": 246.83999999999997, "text": " that they are offering GPU Cloud services as well."}, {"start": 246.84, "end": 254.36, "text": " The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19."}, {"start": 254.36, "end": 259.68, "text": " Lambda's web-based IDE lets you easily access your instance right in your browser."}, {"start": 259.68, "end": 265.84000000000003, "text": " And finally, hold onto your papers because the Lambda GPU Cloud costs less than half of"}, 
{"start": 265.84000000000003, "end": 268.08, "text": " AWS and Asia."}, {"start": 268.08, "end": 273.56, "text": " Make sure to go to LambdaLabs.com, slash papers, and sign up for one of their amazing GPU"}, {"start": 273.56, "end": 274.96, "text": " instances today."}, {"start": 274.96, "end": 278.91999999999996, "text": " Thanks for watching, and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Lu56xVlZ40M
OpenAI Plays Hide and Seek…and Breaks The Game! 🤖
❤️ Check out Weights & Biases here and sign up for a free demo: https://www.wandb.com/papers ❤️ Their blog post is available here: https://www.wandb.com/articles/better-paths-through-idea-space 📝 The paper "Emergent Tool Use from Multi-Agent Interaction" is available here: https://openai.com/blog/emergent-tool-use/ ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join  🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Matthias Jost,, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu 00:00 Intro 00:44 Start - Pandemonium! 01:06 A little learning 01:33 But then - something happened! 02:08 They learned what?! 02:32 It gets even weirder 03:16 Amazing teamwork 04:02 More interesting behaviors 04:33 Extensions 05:02 More stuff from the paper Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/twominutepapers Web: https://cg.tuwien.ac.at/~zsolnai/ #OpenAI
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this project, OpenAI built a hide and seek game for their AI agents to play. While we look at the exact rules here, I will note that the goal of the project was to pit two AI teams against each other and hopefully see some interesting emergent behaviors. And boy, did they do some crazy stuff. The coolest part is that the two teams compete against each other, and whenever one team discovers a new strategy, the other one has to adapt. Kind of like an arms race situation, and it also resembles generative adversarial networks a little. And the results are magnificent, amusing, weird. You'll see in a moment. These agents learn from previous experiences, and to the surprise of no one, for the first few million rounds we start out with pandemonium. Everyone is just running around aimlessly. Without proper strategy and with semi-random movements, the seekers are favored and hence win the majority of the games. Nothing to see here. Then, over time, the hiders learned to lock out the seekers by blocking the doors off with these boxes and started winning consistently. I think the coolest part about this is that the map was deliberately designed by the OpenAI scientists in a way that the hiders can only succeed through collaboration. They cannot win alone, and hence they are forced to learn to work together, which they did quite well. But then something happened. Did you notice this pointy doorstop-shaped object? Are you thinking what I am thinking? Well, probably, and not only that, but about 10 million rounds later, the AI also discovered that it can be pushed near a wall and be used as a ramp, and ta-da! The seekers started winning more again. So the ball is now back in the court of the hiders. Can you defend this? If so, how? Well, these resourceful little critters learned that since there is a little time at the start of the game when the seekers are frozen, and apparently during this time they cannot see them, why not just sneak out, steal the ramp, and lock it away from them? Absolutely incredible. Look at those happy eyes as they are carrying that ramp. And you think it all ends here? No, no, no, not even close. It gets weirder, much weirder. When playing a different map, the seeker noticed that it can use a ramp to climb on top of a box, and this happens. Do you think couch surfing is cool? Give me a break. This is box surfing. And the scientists were quite surprised by this move, as this was one of the first cases where the seeker AI seems to have broken the game. What happens here is that the physics system is coded in a way that they are able to move around by exerting force on themselves, but there is no additional check whether they are on the floor or not, because who in their right mind would think about that? As a result, something that shouldn't ever happen does happen here. And we are still not done yet. This paper just keeps on giving. A few hundred million rounds later, the hiders learned to separate all the ramps from the boxes. Dear Fellow Scholars, this is proper box surfing defense. Again, lock down the remaining tools and build a shelter. Note how well rehearsed and executed this strategy is. There is not a second of time left until the seekers take off. I also love this cheeky move where they set up the shelter right next to the seekers, and I almost feel like they are saying, yeah, see this here? There is not a single thing you can do about it.
In a few isolated cases, other interesting behaviors also emerged. For instance, the hiders learned to exploit the physics system and just chuck the ramp away. After that, the seekers go, what? What just happened? But don't despair, and at this point I would also recommend that you hold onto your papers, because there was also a crazy case where a seeker also learned to abuse a similar physics issue and launch itself exactly onto the top of the hiders. Man, what a paper. This system can be extended and modded for many other tasks too, so expect to see more of these fun experiments in the future. We get to do this for a living, and we are even being paid for it. I can't believe it. In this series, my mission is to showcase beautiful works that light a fire in people. And this is, no doubt, one of those works. Great idea, interesting, unexpected results, crisp presentation, bravo OpenAI. Love it. So, did you enjoy this? What do you think? Make sure to leave a comment below. Also, if you look at the paper, it contains comparisons to an earlier work we covered about intrinsic motivation, shows how to implement circular convolutions for the agents to detect their environment around them, and more. This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford and Berkeley. In this blog post, they show you how to use their system to find clues and steer your research into more promising areas. Make sure to visit them through wandb.com/papers, or just click the link in the video description, and sign up for a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
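The arms-race dynamic described in this episode comes from a simple team-based, zero-sum reward. The sketch below is a hedged illustration of how such a per-step reward could be assigned; the exact values and the handling of the preparation phase are assumptions based on how the episode describes the game, not code from the OpenAI paper.

```python
# Minimal sketch of a hide-and-seek style team reward (an assumption, not the
# paper's code): hiders get +1 per step if no hider is seen by any seeker and
# -1 otherwise, seekers get the negative, and nobody is scored during the
# preparation phase while the seekers are still frozen.

def team_rewards(any_hider_seen: bool, in_preparation_phase: bool):
    """Return (hider_reward, seeker_reward) for one simulation step."""
    if in_preparation_phase:
        return 0.0, 0.0          # seekers are frozen, no one is scored yet
    hider_reward = -1.0 if any_hider_seen else 1.0
    return hider_reward, -hider_reward  # zero-sum: seekers win when hiders lose

# Example: once the game starts, a single spotted hider penalizes the whole team.
print(team_rewards(any_hider_seen=True, in_preparation_phase=False))   # (-1.0, 1.0)
print(team_rewards(any_hider_seen=False, in_preparation_phase=False))  # (1.0, -1.0)
```

Because the reward is shared by the whole team and is the exact opposite for the other team, any strategy one side discovers immediately hurts the other side, which is what drives the back-and-forth escalation shown in the video.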
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.32, "end": 10.0, "text": " In this project, open AI built a hide and seek game for their AI agents to play."}, {"start": 10.0, "end": 14.24, "text": " While we look at the exact rules here, I will note that the goal of the project was to"}, {"start": 14.24, "end": 21.400000000000002, "text": " pit two AI teams against each other and hopefully see some interesting emergent behaviors."}, {"start": 21.400000000000002, "end": 24.92, "text": " And boy, did they do some crazy stuff."}, {"start": 24.92, "end": 30.360000000000003, "text": " The coolest part is that the two teams compete against each other and whenever one team discovers"}, {"start": 30.360000000000003, "end": 33.92, "text": " a new strategy, the other one has to adapt."}, {"start": 33.92, "end": 38.72, "text": " Kind of like an arms race situation and it also resembles generative adversarial networks"}, {"start": 38.72, "end": 39.72, "text": " a little."}, {"start": 39.72, "end": 44.08, "text": " And the results are magnificent, amusing, weird."}, {"start": 44.08, "end": 45.68000000000001, "text": " You'll see in a moment."}, {"start": 45.68000000000001, "end": 51.0, "text": " These agents learn from previous experiences and to the surprise of no one for the first"}, {"start": 51.0, "end": 55.72, "text": " few million rounds we start out with pandemonium."}, {"start": 55.72, "end": 58.52, "text": " Everyone just running around aimlessly."}, {"start": 58.52, "end": 63.84, "text": " Without proper strategy and semi-rendal movements, the seekers are favored and hence win the"}, {"start": 63.84, "end": 65.76, "text": " majority of the games."}, {"start": 65.76, "end": 67.36, "text": " Nothing to see here."}, {"start": 67.36, "end": 73.36, "text": " Then over time, the hiders learn to lock out the seekers by blocking the doors off with"}, {"start": 73.36, "end": 76.84, "text": " these boxes and started winning consistently."}, {"start": 76.84, "end": 81.36, "text": " I think the coolest part about this is that the map was deliberately designed by the"}, {"start": 81.36, "end": 87.52000000000001, "text": " open AI scientists in a way that the hiders can only succeed through collaboration."}, {"start": 87.52000000000001, "end": 92.92, "text": " They cannot win alone and hence they are forced to learn to work together, which they did"}, {"start": 92.92, "end": 94.28, "text": " quite well."}, {"start": 94.28, "end": 96.56, "text": " But then something happened."}, {"start": 96.56, "end": 99.92, "text": " Did you notice this pointy door-stop-shaped object?"}, {"start": 99.92, "end": 102.08000000000001, "text": " Are you thinking what I am thinking?"}, {"start": 102.08, "end": 109.36, "text": " Well, probably, and not only that, but about 10 million rounds later, the AI also discovered"}, {"start": 109.36, "end": 116.16, "text": " that it can be pushed near a wall and be used as a ramp and tadaa-gadam."}, {"start": 116.16, "end": 118.68, "text": " The seekers started winning more again."}, {"start": 118.68, "end": 122.72, "text": " So the ball is now back on the court of the hiders."}, {"start": 122.72, "end": 124.16, "text": " Can you defend this?"}, {"start": 124.16, "end": 125.64, "text": " If so, how?"}, {"start": 125.64, "end": 131.4, "text": " Well, these resourceful little critters learned that since there is a little time at the"}, {"start": 131.4, "end": 137.52, "text": " start of the game when the seekers 
are frozen, apparently during this time they cannot see"}, {"start": 137.52, "end": 143.12, "text": " them so why not just sneak out, steal the ramp and lock it away from them."}, {"start": 143.12, "end": 144.88, "text": " Absolutely incredible."}, {"start": 144.88, "end": 148.6, "text": " Look at those happy eyes as they are carrying that ramp."}, {"start": 148.6, "end": 150.48000000000002, "text": " And you think it all ends here?"}, {"start": 150.48000000000002, "end": 153.16, "text": " No, no, no, not even close."}, {"start": 153.16, "end": 155.92000000000002, "text": " It gets weirder, much weirder."}, {"start": 155.92000000000002, "end": 160.48000000000002, "text": " When playing a different map, the seeker has noticed that it can use a ramp to climb"}, {"start": 160.48, "end": 164.95999999999998, "text": " on the top of a box and this happens."}, {"start": 164.95999999999998, "end": 167.44, "text": " Do you think couch surfing is cool?"}, {"start": 167.44, "end": 168.44, "text": " Give me a break."}, {"start": 168.44, "end": 170.48, "text": " This is box surfing."}, {"start": 170.48, "end": 175.44, "text": " And the scientists were quite surprised by this move as this was one of the first cases"}, {"start": 175.44, "end": 178.92, "text": " where the seeker AI seems to have broken the game."}, {"start": 178.92, "end": 183.48, "text": " What happens here is that the physics system is coded in a way that they are able to move"}, {"start": 183.48, "end": 188.92, "text": " around by exerting force on themselves, but there is no additional check whether they"}, {"start": 188.92, "end": 193.67999999999998, "text": " are on the floor or not because who in their right mind would think about that."}, {"start": 193.67999999999998, "end": 198.56, "text": " As a result, something that shouldn't ever happen does happen here."}, {"start": 198.56, "end": 200.48, "text": " And we are still not done yet."}, {"start": 200.48, "end": 202.56, "text": " This paper just keeps on giving."}, {"start": 202.56, "end": 208.16, "text": " A few hundred million rounds later, the hiders learned to separate all the ramps from the"}, {"start": 208.16, "end": 209.16, "text": " boxes."}, {"start": 209.16, "end": 213.92, "text": " Dear Fellow Scholars, this is proper box surfing defense."}, {"start": 213.92, "end": 219.67999999999998, "text": " Again, lock down the remaining tools and build a shelter."}, {"start": 219.67999999999998, "end": 223.6, "text": " Note how well rehearsed and executed this strategy is."}, {"start": 223.6, "end": 227.67999999999998, "text": " There is not a second of time left until the seeker's take off."}, {"start": 227.67999999999998, "end": 232.88, "text": " I also love this cheeky move where they set up the shelter right next to the seeker's"}, {"start": 232.88, "end": 237.27999999999997, "text": " and I almost feel like they are saying, yeah, see this here?"}, {"start": 237.27999999999997, "end": 240.23999999999998, "text": " There is not a single thing you can do about it."}, {"start": 240.24, "end": 244.76000000000002, "text": " In a few isolated cases, other interesting behaviors also emerged."}, {"start": 244.76000000000002, "end": 250.24, "text": " For instance, the hiders learned to exploit the physics system and just chuck the ramp"}, {"start": 250.24, "end": 251.24, "text": " away."}, {"start": 251.24, "end": 254.4, "text": " After that, the seekers go, what?"}, {"start": 254.4, "end": 255.8, "text": " What just happened?"}, {"start": 255.8, "end": 260.72, "text": " But don't 
despair and at this point, I would also recommend that you hold onto your papers"}, {"start": 260.72, "end": 266.48, "text": " because there was also a crazy case where a seeker also learned to abuse a similar physics"}, {"start": 266.48, "end": 271.76, "text": " issue and launch itself exactly onto the top of the hiders."}, {"start": 271.76, "end": 274.68, "text": " Man, what a paper."}, {"start": 274.68, "end": 279.6, "text": " This system can be extended and modded for many other tasks too, so expect to see more"}, {"start": 279.6, "end": 281.96000000000004, "text": " of these fun experiments in the future."}, {"start": 281.96000000000004, "end": 285.64000000000004, "text": " We get to do this for a living and we are even being paid for this."}, {"start": 285.64000000000004, "end": 287.44, "text": " I can't believe it."}, {"start": 287.44, "end": 293.32, "text": " In this series, my mission is to showcase beautiful works that light a fire in people."}, {"start": 293.32, "end": 296.44, "text": " And this is, no doubt, one of those works."}, {"start": 296.44, "end": 303.16, "text": " Great idea, interesting, unexpected results, crisp presentation, bravo OpenAI."}, {"start": 303.16, "end": 304.16, "text": " Love it."}, {"start": 304.16, "end": 306.0, "text": " So did you enjoy this?"}, {"start": 306.0, "end": 307.0, "text": " What do you think?"}, {"start": 307.0, "end": 308.76, "text": " Make sure to leave a comment below."}, {"start": 308.76, "end": 313.72, "text": " Also, if you look at the paper, it contains comparisons to an earlier work we covered"}, {"start": 313.72, "end": 319.72, "text": " about intrinsic motivation, shows how to implement circular convolutions for the agents to detect"}, {"start": 319.72, "end": 322.8, "text": " their environment around them and more."}, {"start": 322.8, "end": 326.12, "text": " This episode has been supported by weights and biases."}, {"start": 326.12, "end": 330.72, "text": " weights and biases provides tools to track your experiments in your deep learning projects."}, {"start": 330.72, "end": 337.0, "text": " It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota"}, {"start": 337.0, "end": 339.76, "text": " Research, Stanford and Berkeley."}, {"start": 339.76, "end": 344.64, "text": " In this blog post, they show you how to use their system to find clues and steer your"}, {"start": 344.64, "end": 347.28000000000003, "text": " research into more promising areas."}, {"start": 347.28000000000003, "end": 354.68, "text": " Make sure to visit them through WANDDB.com slash papers, W-A-N-D-B.com slash papers, or"}, {"start": 354.68, "end": 359.28000000000003, "text": " just click the link in the video description and sign up for a freedom of today."}, {"start": 359.28000000000003, "end": 363.2, "text": " Our thanks to weights and biases for helping us make better videos for you."}, {"start": 363.2, "end": 393.15999999999997, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=882O_7hsAms
AI Learns Human Movement From Unorganized Data 🏃‍♀️
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Learning Predict-and-Simulate Policies From Unorganized Human Motion Data" is available here: http://mrl.snu.ac.kr/publications/ProjectICC/ICC.html 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Matthias Jost,, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
This episode has been supported by Lambda. Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Last year, an amazing neural network-based technique appeared that was able to look at a bunch of unlabeled motion data and learn to weave it together to control the motion of quadrupeds, like this wolf here. It was able to successfully address the shortcomings of previous works. For instance, the weird sliding motions have been eliminated, and it was also capable of following some predefined trajectories. This new paper continues research in this direction by proposing a technique that is also capable of interacting with its environment or other characters. For instance, they can punch each other, and after the punch, they can recover from undesirable positions, and more. The problem formulation is as follows. The technique is given the current state of the character and the goal, and you see here with blue how it predicts the motion to continue. It understands that we have to walk towards the goal, that we are likely to fall when hit by a ball, and it knows that we then have to get up and continue our journey to eventually reach our goal. Some amazing life advice from the AI right there. The goal here is also to learn something meaningful from lots of barely labeled human motion data. Barely labeled means that a bunch of videos are given almost as is, without additional information on what movements are being performed in these videos. If we had labels for all of this data that you see here, they would say that this sequence shows a jump, and these ones are for running. However, the labeling process takes a ton of time and effort, so if we can get away without it, that's glorious, but in return, we create an additional burden that the learning algorithm has to shoulder. Unfortunately, the problem gets even worse. As you see here, the number of frames contained in the original dataset is very small. To alleviate this, the authors decided to augment this dataset, which means trying to combine parts of this data to squeeze out as much information as possible. You can see some examples here of how this motion data can be combined from many small segments, and in the paper, they show that the augmentation helps create up to 10 to 30 times more training data for the neural networks. As a result of this augmented dataset, it can learn to perform zombie and gorilla movements, chicken hopping, even dribbling with a basketball, you name it. But even more, we can give the AI high-level commands interactively, and it will try to weave the motions together appropriately. They can also punch each other. Ow! And all this was learned from a bunch of unorganized data. What a time to be alive! If you are a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos, and I'm happy to tell you that they are offering GPU Cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser. And finally, hold onto your papers, because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to LambdaLabs.com/papers and sign up for one of their amazing GPU instances today. Thanks for watching and for your generous support, and I'll see you next time.
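The augmentation step mentioned in this episode, combining many small motion segments into longer ones, can be illustrated with a short sketch. Everything here (the clip format, the boundary-pose matching rule, and the threshold) is an illustrative assumption rather than the paper's actual pipeline.

```python
# A hedged sketch of motion-clip augmentation: stitch short, unlabeled clips
# together wherever the last pose of one clip is close to the first pose of
# another. Clip representation and threshold are illustrative assumptions.
import numpy as np

def stitchable(clip_a, clip_b, threshold=1e-6):
    """Clips are (frames, pose_dims) arrays; compare boundary poses."""
    return np.linalg.norm(clip_a[-1] - clip_b[0]) < threshold

def augment(clips):
    """Concatenate every compatible clip pair to grow the training set."""
    augmented = list(clips)
    for i, a in enumerate(clips):
        for j, b in enumerate(clips):
            if i != j and stitchable(a, b):
                # Skip b's first frame so the shared boundary pose is not duplicated.
                augmented.append(np.concatenate([a, b[1:]], axis=0))
    return augmented

# Tiny demo: cut one fake recording into overlapping clips so boundaries line up.
rng = np.random.default_rng(0)
recording = rng.normal(size=(41, 15)).cumsum(axis=0)        # fake long motion capture
clips = [recording[i:i + 11] for i in range(0, 31, 10)]      # 4 clips sharing end/start poses
print(len(clips), "clips before,", len(augment(clips)), "after augmentation")
```

In practice a real pipeline would also blend the joint trajectories around the seam instead of hard-cutting, but the core idea of multiplying the training data by recombining compatible segments is the same.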
[{"start": 0.0, "end": 3.04, "text": " This episode has been supported by Lambda."}, {"start": 3.04, "end": 7.12, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Ejorna Ifehir."}, {"start": 7.12, "end": 12.52, "text": " Last year, an amazing neural network-based technique appeared that was able to look at a bunch"}, {"start": 12.52, "end": 18.76, "text": " of unlabeled motion data and learned to weave them together to control the motion of quadrupeds,"}, {"start": 18.76, "end": 20.400000000000002, "text": " like this wolf here."}, {"start": 20.400000000000002, "end": 24.560000000000002, "text": " It was able to successfully address the shortcomings of previous works."}, {"start": 24.56, "end": 30.52, "text": " For instance, the weird sliding motions have been eliminated, and it was also capable of"}, {"start": 30.52, "end": 33.48, "text": " following some predefined trajectories."}, {"start": 33.48, "end": 37.84, "text": " This new paper continues research in this direction by proposing a technique that is"}, {"start": 37.84, "end": 42.879999999999995, "text": " also capable of interacting with its environment or other characters."}, {"start": 42.879999999999995, "end": 48.879999999999995, "text": " For instance, they can punch each other, and after the punch, they can recover from undesirable"}, {"start": 48.879999999999995, "end": 51.16, "text": " positions and more."}, {"start": 51.16, "end": 53.64, "text": " The problem formulation is as follows."}, {"start": 53.64, "end": 58.8, "text": " It is given the current state of the character and the goal, and you see here with blue how"}, {"start": 58.8, "end": 61.52, "text": " it predicts the motion to continue."}, {"start": 61.52, "end": 66.4, "text": " It understands that we have to walk towards the goal that we are likely to fall when hit"}, {"start": 66.4, "end": 73.2, "text": " by a ball, and it knows that then we have to get up and continue our journey and eventually"}, {"start": 73.2, "end": 74.8, "text": " reach our goal."}, {"start": 74.8, "end": 78.16, "text": " Some amazing life advice from the AI right there."}, {"start": 78.16, "end": 83.6, "text": " The goal here is also to learn something meaningful from lots of barely labeled human motion"}, {"start": 83.6, "end": 84.6, "text": " data."}, {"start": 84.6, "end": 90.52, "text": " Barely labeled means that a bunch of videos are given almost as is without additional information"}, {"start": 90.52, "end": 93.6, "text": " on what movements are being performed in these videos."}, {"start": 93.6, "end": 98.6, "text": " If we had labels for all this data that you see here, it would say that this sequence shows"}, {"start": 98.6, "end": 101.72, "text": " a jump, and these ones are for running."}, {"start": 101.72, "end": 107.32, "text": " However, the labeling process takes a ton of time and effort, so if we can get away without"}, {"start": 107.32, "end": 113.75999999999999, "text": " it, that's glorious, but in return, with this, we create an additional burden that the"}, {"start": 113.75999999999999, "end": 115.96, "text": " learning algorithm has to shoulder."}, {"start": 115.96, "end": 119.52, "text": " Unfortunately, the problem gets even worse."}, {"start": 119.52, "end": 125.0, "text": " As you see here, the number of frames contained in the original dataset is very scarce."}, {"start": 125.0, "end": 130.72, "text": " To alleviate this, the authors decided to augment this dataset, which means trying to combine"}, {"start": 130.72, "end": 136.95999999999998, 
"text": " parts of this data to squeeze out as much information as possible."}, {"start": 136.96, "end": 142.48000000000002, "text": " To see some examples here, how this motion data can be combined from many small segments,"}, {"start": 142.48000000000002, "end": 148.8, "text": " and in the paper, they show that the augmentation helps us create even up to 10 to 30 times"}, {"start": 148.8, "end": 151.84, "text": " more training data for the neural networks."}, {"start": 151.84, "end": 157.96, "text": " As a result of this augmented dataset, it can learn to perform zombie, gorilla movements,"}, {"start": 157.96, "end": 162.28, "text": " chicken hopping, even dribbling with a basketball, you name it."}, {"start": 162.28, "end": 167.44, "text": " But even more, we can give the AI high level commands interactively, and it will try to"}, {"start": 167.44, "end": 171.12, "text": " weave the motions together appropriately."}, {"start": 171.12, "end": 173.64, "text": " They can also punch each other."}, {"start": 173.64, "end": 175.64, "text": " Ow!"}, {"start": 175.64, "end": 179.76, "text": " And all this was learned from a bunch of unorganized data."}, {"start": 179.76, "end": 181.36, "text": " What a time to be alive!"}, {"start": 181.36, "end": 186.72, "text": " If you are a researcher or a startup looking for cheap GPU compute to run these algorithms,"}, {"start": 186.72, "end": 189.16, "text": " check out Lambda GPU Cloud."}, {"start": 189.16, "end": 194.2, "text": " I've talked about Lambda's GPU workstations in other videos, and I'm happy to tell you"}, {"start": 194.2, "end": 197.51999999999998, "text": " that they are offering GPU Cloud services as well."}, {"start": 197.51999999999998, "end": 204.48, "text": " The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19."}, {"start": 204.48, "end": 210.35999999999999, "text": " Lambda's web-based IDE lets you easily access your instance right in your browser."}, {"start": 210.35999999999999, "end": 216.4, "text": " And finally, hold onto your papers because the Lambda GPU Cloud costs less than half of"}, {"start": 216.4, "end": 218.72, "text": " AWS and Azure."}, {"start": 218.72, "end": 224.2, "text": " Make sure to go to LambdaLabs.com, slash papers, and sign up for one of their amazing GPU"}, {"start": 224.2, "end": 225.6, "text": " instances today."}, {"start": 225.6, "end": 255.16, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=7SM816P5G9s
Is a Realistic Honey Simulation Possible? 🍯
❤️ Check out Weights & Biases here and sign up for a free demo: https://www.wandb.com/papers 📝 The paper "A Geometrically Consistent Viscous Fluid Solver with Two-Way Fluid-Solid Coupling" is available here: http://gamma.cs.unc.edu/ViscTwoway/ The Weights & Biases posts on the Witness (and code!) are available here: https://www.wandb.com/articles/i-trained-a-robot-to-play-the-witness https://github.com/wandb/witness My earlier video on The Witness game: https://www.youtube.com/watch?v=Ee9vF5eChhU ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join  🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Matthias Jost,, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu 00:00 Physics simulations are amazing 00:30 More challenges 00:54 The new paper 01:15 Honey simulation 02:11 Even small nuances are correct 02:22 Adaptivity is hard 03:06 A new method for it Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. If we study the laws of fluid motion from physics and write a computer program that contains these laws, we can create wondrous fluid simulations like the ones you see here. The amount of detail we can simulate with these programs is increasing every year, not only due to the fact that hardware improves over time, but also because the pace of progress in computer graphics research is truly remarkable. So, is there nothing else to do? Are we done with fluid simulation research? Oh no! No, no, no. For instance, fluid-solid interaction still remains a challenging phenomenon to simulate. This means that the sand is allowed to have an effect on the fluid, but at the same time, as the fluid sloshes around, it also moves the sand particles within. This is what we refer to as two-way coupling. Note that this previous work that you see here was built on the material point method, a hybrid simulation technique that uses both particles and grids, whereas this new paper introduces proper fluid-solid coupling to the simpler grid-based simulators. Not only that, but this new work also shows us that there are different kinds of two-way coupling. If we look at this footage with the honey and the dipper, it looks great; however, this still doesn't seem right to me. We are doing science here, so fortunately, we don't need to guess what seems and what doesn't seem right. This is my favorite part, because this is when we let reality be our judge and compare to what exactly happens in the real world. So, let's do that. Whoa! There's quite a bit of a difference, because in reality, the honey is able to support the dipper. One-way coupling, of course, cannot simulate this kind of back and forth interaction, and neither can weak two-way coupling pull this off. And now, let's see. Yes, there we go. The new strong two-way coupling method finally gets this right. And not only that, but what I really love about this is that it also gets small nuances right. I will try to speed up the footage a little so you can see that the honey doesn't only support the dipper, but the dipper still has some subtle movements, both in reality and in the simulation. A plus, love it. So, what is the problem? Why is this so difficult to simulate? One of the key problems here is being able to have a simulation that has a fine resolution in areas where the fluid and the solid intersect each other. If we create a super detailed simulation, it will take from hours to days to compute. But on the other hand, if we have a too coarse one, it will compute the required deformations in so few of these grid points that we'll get a really inaccurate simulation, and not only that, but we will even miss some of the interactions completely. This paper proposes a neat new volume estimation technique that focuses these computations on where the action happens, and only there, which means that we can get these really incredible results even if we only run a relatively coarse simulation. I could watch these gooey viscous simulations all day long. If you have a closer look at the paper in the description, you will find some hard data that shows that this technique executes quicker than other methods that are able to provide comparable results. This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford and Berkeley.
In this blog post, you see that with the help of Weights & Biases, it is possible to write an AI that plays The Witness, one of my favorite puzzle games. If you are interested in the game itself, you can also check out my earlier video on it. I know it sounds curious, but I indeed made a video about a game on this channel. You can find it in the video description. And also, make sure to visit them through wandb.com/papers, or just click the link in the video description, and sign up for a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
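To make the difference between weak and strong two-way coupling from the honey example more concrete, here is a schematic sketch. The two "solvers" are deliberately toy stand-ins (a single rigid body and a very viscous drag force), not a real fluid simulator; only the control flow, one force exchange per time step versus iterating the exchange to convergence inside the step, reflects the distinction discussed in the episode.

```python
# Schematic only, under stated assumptions: weak two-way coupling exchanges
# forces between fluid and solid once per time step; strong two-way coupling
# iterates the exchange inside the step until the interface force stops
# changing, which is what lets very viscous honey genuinely support a dipper.

def fluid_force_on_solid(solid_velocity, viscosity=50.0):
    # Stand-in fluid: a very viscous medium resists the solid's motion.
    return -viscosity * solid_velocity

def advance_solid(velocity, force, dt, mass=1.0, gravity=-9.81):
    return velocity + dt * (gravity + force / mass)

def weak_step(velocity, dt):
    force = fluid_force_on_solid(velocity)            # one exchange per step
    return advance_solid(velocity, force, dt)

def strong_step(velocity, dt, iterations=50, relaxation=0.1):
    guess = velocity
    for _ in range(iterations):                       # iterate to self-consistency
        force = fluid_force_on_solid(guess)
        new_guess = advance_solid(velocity, force, dt)
        guess += relaxation * (new_guess - guess)     # under-relaxed fixed point
    return guess

v_weak = v_strong = 0.0
for _ in range(20):
    v_weak, v_strong = weak_step(v_weak, dt=0.05), strong_step(v_strong, dt=0.05)
print(f"weak coupling velocity: {v_weak:.1f}  strong coupling velocity: {v_strong:.3f}")
```

With the stiff drag used in this toy, the single-exchange scheme diverges to a nonsensical velocity while the iterated one settles at the expected slow sinking speed, which mirrors why only strong coupling keeps the dipper resting stably on the honey.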
[{"start": 0.0, "end": 4.04, "text": " Dear Fellow Scholars, this is two-minute papers with Karuijona Ife here."}, {"start": 4.04, "end": 7.16, "text": " If we study the laws of fluid motion from physics"}, {"start": 7.16, "end": 10.24, "text": " and write a computer program that contains these laws,"}, {"start": 10.24, "end": 14.280000000000001, "text": " we can create wondrous fluid simulations like the ones you see here."}, {"start": 14.280000000000001, "end": 19.12, "text": " The amount of detail we can simulate with these programs is increasing every year,"}, {"start": 19.12, "end": 22.72, "text": " not only due to the fact that hardware improves over time,"}, {"start": 22.72, "end": 27.96, "text": " but also the pace of progress in computer graphics research is truly remarkable."}, {"start": 27.96, "end": 30.76, "text": " So, is there nothing else to do?"}, {"start": 30.76, "end": 33.68, "text": " Are we done with fluid simulation research?"}, {"start": 33.68, "end": 34.84, "text": " Oh no!"}, {"start": 34.84, "end": 35.92, "text": " No, no, no."}, {"start": 35.92, "end": 41.6, "text": " For instance, fluid solid interaction still remains a challenging phenomenon to simulate."}, {"start": 41.6, "end": 45.16, "text": " This means that the sand is allowed to have an effect on the fluid,"}, {"start": 45.16, "end": 48.480000000000004, "text": " but at the same time, as the fluid sloshes around,"}, {"start": 48.480000000000004, "end": 51.16, "text": " it also moves the sand particles within."}, {"start": 51.16, "end": 54.36, "text": " This is what we refer to as two-way coupling."}, {"start": 54.36, "end": 59.519999999999996, "text": " Note that this previous work that you see here was built on the material point method,"}, {"start": 59.519999999999996, "end": 63.879999999999995, "text": " a hybrid simulation technique that uses both particles and grids,"}, {"start": 63.879999999999995, "end": 68.16, "text": " whereas this new paper introduces proper fluid solid coupling"}, {"start": 68.16, "end": 70.68, "text": " to the simpler grid-based simulators."}, {"start": 70.68, "end": 77.08, "text": " Not only that, but this new work also shows us that there are different kinds of two-way coupling."}, {"start": 77.08, "end": 80.92, "text": " If we look at this footage with the honey and the deeper, it looks great,"}, {"start": 80.92, "end": 84.4, "text": " however, this still doesn't seem right to me."}, {"start": 84.4, "end": 90.68, "text": " We are doing science here, so fortunately, we don't need to guess what seems and what doesn't seem right."}, {"start": 90.68, "end": 95.4, "text": " This is my favorite part because this is when we let reality BR judge"}, {"start": 95.4, "end": 99.04, "text": " and compare to what exactly happens in the real world."}, {"start": 99.04, "end": 100.92, "text": " So, let's do that."}, {"start": 103.88, "end": 105.08, "text": " Whoa!"}, {"start": 105.08, "end": 108.12, "text": " There's quite a bit of a difference because in reality,"}, {"start": 108.12, "end": 111.24000000000001, "text": " the honey is able to support the deeper."}, {"start": 111.24000000000001, "end": 117.08000000000001, "text": " One-way coupling, of course, cannot simulate this kind of back and forth interaction"}, {"start": 117.08000000000001, "end": 120.68, "text": " and neither can weak two-way coupling pull this off."}, {"start": 122.92, "end": 125.32000000000001, "text": " And now, let's see."}, {"start": 125.32000000000001, "end": 127.64, "text": " Yes, there we go."}, {"start": 
127.64, "end": 131.96, "text": " The new strong two-way coupling method finally gets this right."}, {"start": 131.96, "end": 137.64000000000001, "text": " And not only that, but what I really love about this is that it also gets small nuances right."}, {"start": 137.64, "end": 141.64, "text": " I will try to speed up the footage a little so you can see that the honey"}, {"start": 141.64, "end": 146.83999999999997, "text": " doesn't only support the deeper, but the deeper still has some subtle movements"}, {"start": 146.83999999999997, "end": 150.6, "text": " both in reality and in the simulation."}, {"start": 150.6, "end": 152.67999999999998, "text": " A plus, love it."}, {"start": 152.67999999999998, "end": 154.6, "text": " So, what is the problem?"}, {"start": 154.6, "end": 157.07999999999998, "text": " Why is this so difficult to simulate?"}, {"start": 157.07999999999998, "end": 160.67999999999998, "text": " One of the key problems here is being able to have a simulation"}, {"start": 160.67999999999998, "end": 166.6, "text": " that has a fine resolution in areas where fluid and the solid intersect each other."}, {"start": 166.6, "end": 171.32, "text": " If we create a super detailed simulation, it will take from hours to days to compute."}, {"start": 171.32, "end": 174.35999999999999, "text": " But on the other hand, if we have a two-course one,"}, {"start": 174.35999999999999, "end": 178.84, "text": " it will compute the required deformations in so few of these grid points"}, {"start": 178.84, "end": 182.76, "text": " that will get a really inaccurate simulation and not only that,"}, {"start": 182.76, "end": 186.84, "text": " but we will even miss some of the interactions completely."}, {"start": 186.84, "end": 190.44, "text": " This paper proposes a neat new volume estimation technique"}, {"start": 190.44, "end": 194.28, "text": " that focuses these computations to where the action happens"}, {"start": 194.28, "end": 199.0, "text": " and only there, which means that we can get these really incredible results"}, {"start": 199.0, "end": 203.24, "text": " even if we only run a relatively coarse simulation."}, {"start": 203.24, "end": 207.72, "text": " I could watch these GUI viscous simulations all day long."}, {"start": 207.72, "end": 210.36, "text": " If you have a closer look at the paper in the description,"}, {"start": 210.36, "end": 215.08, "text": " you will find some hard data that shows that this technique executes quicker"}, {"start": 215.08, "end": 219.0, "text": " than other methods that are able to provide comparable results."}, {"start": 219.0, "end": 222.36, "text": " This episode has been supported by weights and biases."}, {"start": 222.36, "end": 225.72000000000003, "text": " Weight and biases provides tools to track your experiments"}, {"start": 225.72000000000003, "end": 227.4, "text": " in your deep learning projects."}, {"start": 227.4, "end": 231.0, "text": " It can save you a ton of time and money in these projects"}, {"start": 231.0, "end": 234.76000000000002, "text": " and is being used by OpenAI, Toyota Research,"}, {"start": 234.76000000000002, "end": 236.68, "text": " Stanford and Berkeley."}, {"start": 236.68, "end": 240.36, "text": " In this blog post, you see that with the help of weights and biases,"}, {"start": 240.36, "end": 244.36, "text": " it is possible to write an AI that plays the witness"}, {"start": 244.36, "end": 246.44000000000003, "text": " one of my favorite puzzler games."}, {"start": 246.44000000000003, "end": 248.36, "text": " If you are 
interested in the game itself,"}, {"start": 248.36, "end": 250.84, "text": " you can also check out my earlier video on it."}, {"start": 250.84, "end": 252.6, "text": " I know it sounds curious."}, {"start": 252.6, "end": 255.8, "text": " I indeed made a video about a game on this channel."}, {"start": 255.8, "end": 257.96, "text": " You can find it in the video description."}, {"start": 257.96, "end": 260.12, "text": " And also, make sure to visit them through"}, {"start": 260.12, "end": 266.28000000000003, "text": " www.wndb.com slash papers"}, {"start": 266.28000000000003, "end": 268.76, "text": " or just click the link in the video description"}, {"start": 268.76, "end": 270.92, "text": " and sign up for a freedom of today."}, {"start": 270.92, "end": 274.92, "text": " Or thanks to weights and biases for helping us make better videos for you."}, {"start": 274.92, "end": 277.16, "text": " Thanks for watching and for your generous support"}, {"start": 277.16, "end": 281.24, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=RoGHVI-w9bE
DeepFake Detector AIs Are Good Too!
❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 📝 The paper "FaceForensics++: Learning to Detect Manipulated Facial Images" is available here: http://www.niessnerlab.org/projects/roessler2019faceforensicspp.html ❤️ Watch these videos in early access on our Patreon page or join us here on YouTube: - https://www.patreon.com/TwoMinutePapers - https://www.youtube.com/channel/UCbfYPyITQ-7l4upoX8nvctg/join  🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Matthias Jost,, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #DeepFake #DeepFakes #FaceSwap
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. We talked about the technique by the name Face2Face back in 2016, approximately 300 videos ago. It was able to take a video of us and transfer our gestures to a target subject. With techniques like this, it's now easier and cheaper than ever to create these deepfake videos of a target subject, provided that we have enough training data, which is almost certainly the case for the people who are the most high-value targets for these kinds of operations. Look here, some of these videos are real and some are fake. What do you think? Which is which? Well, here are the results. This one contains artifacts and is hence easy to spot, but the rest, it's tough, and it's getting tougher by the day. How many did you get right? Make sure to leave a comment below. However, don't despair, it's not all doom and gloom. Approximately a year ago, in came FaceForensics, a paper that contains a large dataset of original and manipulated video pairs. As this offered a ton of training data for real and forged videos, it became possible to train a deepfake detector. You can see it here in action as these green-to-red colors showcase regions that the AI correctly thinks were tampered with. However, this follow-up paper, by the name FaceForensics++, contains not only an improved dataset, but provides many more valuable insights to help us detect these deepfake videos, and even more. Let's dive in. Key insight number one. As you've seen a minute ago, many of these deepfakes introduce imperfections or defects to the video. However, most videos that we watch on the internet are compressed, and the compression procedure, you have guessed right, also introduces artifacts to the video. From this, it follows that hiding these deepfake artifacts behind compressed videos sounds like a good strategy to fool humans and detector neural networks alike, and not only that, but the paper also shows us by how much exactly. Here, you see a table where each row shows the detection accuracy of previous techniques and the newly proposed one, and the most interesting part is how this accuracy drops when we go from HQ to LQ, or in other words, from a high-quality video to a lower-quality one with more compression artifacts. Overall, we can get an 80-95% success rate, which is absolutely amazing. But, of course, you ask, amazing compared to what? Onwards to insight number two. This chart shows how humans fared in deepfake detection, and as you can see, not too well. Don't forget, the 50% line means that the human guesses were as good as a coin flip, which means that they were not doing well at all. Face2Face hovers around this ratio, and if you look at neural textures, you see that this is a technique that is extremely effective at fooling humans. And wait, what's that? For all the other techniques, we see that the gray bars are shorter, meaning that it's more difficult to find out if a video is a deepfake, because its own artifacts are hidden behind the compression artifacts. But the opposite is the case for neural textures, perhaps because of its small footprint on the videos. Note that the state-of-the-art detector AI, for instance, the one proposed in this paper, does way better than these 204 human participants. This work does not only introduce a dataset and these cool insights, but also introduces a detector neural network.
Now, hold on to your papers, because this detection pipeline is not only so powerful that it practically eats compressed deepfakes for breakfast, but it even tells us with remarkable accuracy which method was used to tamper with the input footage. Bravo! Now, it is of utmost importance that we let people know about the existence of these techniques; this is what I'm trying to accomplish with this video. But that's not enough, so I also went to this year's biggest NATO conference and made sure that political and military decision-makers are also informed about this topic. Last year, I went to the European Political Strategy Center with a similar goal. I was so nervous before both of these talks and spent a long time rehearsing them, which delayed a few videos here on the channel. However, because of your support on Patreon, I am in a fortunate situation where I can focus on doing what is right and what is best for all of us, and not worry about the financials all the time. I am really grateful for that, it really is a true privilege. Thank you. If you wish to support us, make sure to click the Patreon link in the video description. Thanks for watching and for your generous support, and I'll see you next time.
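For readers wondering what training such a deepfake detector looks like in practice, here is a hedged sketch of the general recipe: fine-tune an off-the-shelf image classifier on face crops labeled real or fake. This is not the exact architecture or training setup from the FaceForensics++ paper; the backbone choice, hyperparameters, and the random stand-in data are assumptions for illustration only.

```python
# A hedged sketch of a frame-level deepfake detector (illustrative, not the
# paper's exact pipeline): a standard CNN backbone with a single fake/real
# logit, trained with binary cross-entropy. Random tensors stand in for a
# dataset of cropped, possibly compressed, face frames.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18()                         # backbone; pretrained weights help in practice
model.fc = nn.Linear(model.fc.in_features, 1)     # single logit: fake vs. real
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stand-in batch: 8 face crops (3x224x224), half labeled fake (1) and half real (0).
frames = torch.randn(8, 3, 224, 224)
labels = torch.tensor([0., 0., 0., 0., 1., 1., 1., 1.]).unsqueeze(1)

for step in range(3):                             # a few illustrative training steps
    optimizer.zero_grad()
    logits = model(frames)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss {loss.item():.3f}")

# At test time, averaging sigmoid(logit) over a video's frames gives a per-video
# fakeness score; training on heavily compressed frames is what makes the
# detector robust to the compression artifacts discussed above.
```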
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karo Zsolnai-Fehir."}, {"start": 4.4, "end": 11.16, "text": " We talked about the technique by the name Phase to Phase back in 2016, approximately 300 videos"}, {"start": 11.16, "end": 12.16, "text": " ago."}, {"start": 12.16, "end": 17.0, "text": " It was able to take a video of us and transfer our gestures to a target subject."}, {"start": 17.0, "end": 22.080000000000002, "text": " With techniques like this, it's now easier and cheaper than ever to create these deep-fake"}, {"start": 22.080000000000002, "end": 27.2, "text": " videos of a target subject provided that we have enough training data, which is almost"}, {"start": 27.2, "end": 33.36, "text": " certainly the case for people who are the most high-value targets for these kinds of operations."}, {"start": 33.36, "end": 38.0, "text": " Look here, some of these videos are real and some are fake."}, {"start": 38.0, "end": 39.0, "text": " What do you think?"}, {"start": 39.0, "end": 40.0, "text": " Which is which?"}, {"start": 40.0, "end": 42.64, "text": " Well, here are the results."}, {"start": 42.64, "end": 48.84, "text": " This one contains artifacts and is hence easy to spot, but the rest, it's tough and"}, {"start": 48.84, "end": 50.8, "text": " it's getting tougher by the day."}, {"start": 50.8, "end": 52.36, "text": " How many did you get right?"}, {"start": 52.36, "end": 54.2, "text": " Make sure to leave a comment below."}, {"start": 54.2, "end": 57.88, "text": " However, don't despair, it's not all doom and gloom."}, {"start": 57.88, "end": 64.32000000000001, "text": " Approximately a year ago, in-came Phase 4-N-Zix, a paper that contains a large data set of"}, {"start": 64.32000000000001, "end": 67.4, "text": " original and manipulated video pairs."}, {"start": 67.4, "end": 72.60000000000001, "text": " As this offered a ton of training data for real and forged videos, it became possible"}, {"start": 72.60000000000001, "end": 75.28, "text": " to train a deep-fake detector."}, {"start": 75.28, "end": 81.32000000000001, "text": " You can see it here in action as these green-to-red colors showcase regions that the AI correctly"}, {"start": 81.32000000000001, "end": 83.2, "text": " thinks were tempered with."}, {"start": 83.2, "end": 89.32000000000001, "text": " However, this follow-up paper, by the name Phase 4-N-Zix Plus Plus, contains not only an"}, {"start": 89.32000000000001, "end": 94.92, "text": " improved data set, but provides many more valuable insights to help us detect these deep-fake"}, {"start": 94.92, "end": 97.68, "text": " videos and even more."}, {"start": 97.68, "end": 98.84, "text": " Let's dive in."}, {"start": 98.84, "end": 100.68, "text": " Key insight number one."}, {"start": 100.68, "end": 105.92, "text": " As you've seen a minute ago, many of these deep-fakes introduce imperfections or defects"}, {"start": 105.92, "end": 106.92, "text": " to the video."}, {"start": 106.92, "end": 112.32000000000001, "text": " However, most videos that we watch on the internet are compressed and the compression procedure,"}, {"start": 112.32, "end": 116.36, "text": " you have guessed right, also introduces artifacts to the video."}, {"start": 116.36, "end": 122.28, "text": " From this, it follows that hiding these deep-fake artifacts behind compressed videos sounds"}, {"start": 122.28, "end": 128.07999999999998, "text": " like a good strategy to fool humans and detector neural networks likewise, and not only that,"}, {"start": 
128.07999999999998, "end": 131.6, "text": " but the paper also shows us by how much exactly."}, {"start": 131.6, "end": 137.24, "text": " Here, you see a table where each row shows the detection accuracy of previous techniques"}, {"start": 137.24, "end": 143.16, "text": " and a new proposed one, and the most interesting part is how this accuracy drops when we go"}, {"start": 143.16, "end": 150.0, "text": " from HQ to LQ or in other words, from a high-quality video to a lower quality one with more"}, {"start": 150.0, "end": 151.64000000000001, "text": " compression artifacts."}, {"start": 151.64000000000001, "end": 158.08, "text": " Overall, we can get an 80-95% success rate, which is absolutely amazing."}, {"start": 158.08, "end": 162.76000000000002, "text": " But, of course, you ask, amazing compared to what?"}, {"start": 162.76000000000002, "end": 165.16000000000003, "text": " Onwards to insight number two."}, {"start": 165.16, "end": 171.35999999999999, "text": " This chart shows how humans fared in deep-fake detection, and as you can see, not too well."}, {"start": 171.35999999999999, "end": 177.35999999999999, "text": " Don't forget, the 50% line means that the human gases were as good as a coin flip, which"}, {"start": 177.35999999999999, "end": 180.04, "text": " means that they were not doing well at all."}, {"start": 180.04, "end": 185.12, "text": " Face-to-face hovers around this ratio, and if you look at neural textures, you see that"}, {"start": 185.12, "end": 189.48, "text": " this is a technique that is extremely effective at fooling humans."}, {"start": 189.48, "end": 191.72, "text": " And wait, what's that?"}, {"start": 191.72, "end": 196.44, "text": " For all the other techniques, we see that the gray bars are shorter, meaning that it's"}, {"start": 196.44, "end": 201.64, "text": " more difficult to find out if a video is a deep-fake because its own artifacts are hidden"}, {"start": 201.64, "end": 204.16, "text": " behind the compression artifacts."}, {"start": 204.16, "end": 209.16, "text": " But the opposite is the case for neural textures, perhaps because of its small footprint on"}, {"start": 209.16, "end": 210.16, "text": " the videos."}, {"start": 210.16, "end": 215.64, "text": " Note that the state of the RDetector AI, for instance, the one proposed in this paper,"}, {"start": 215.64, "end": 219.76, "text": " does way better than these 204 human participants."}, {"start": 219.76, "end": 225.48, "text": " This work does not only introduce a data set, these cool insights, but also introduces"}, {"start": 225.48, "end": 227.23999999999998, "text": " a detector neural network."}, {"start": 227.23999999999998, "end": 233.16, "text": " Now, hold on to your papers because this detection pipeline is not only so powerful that"}, {"start": 233.16, "end": 239.39999999999998, "text": " it practically eats compressed deep-fakes for breakfast, but it even tells us with remarkable"}, {"start": 239.39999999999998, "end": 244.23999999999998, "text": " accuracy which method was used to temper with the input footage."}, {"start": 244.23999999999998, "end": 245.23999999999998, "text": " Bravo!"}, {"start": 245.24, "end": 249.96, "text": " Now, it is of utmost importance that we let the people know about the existence of these"}, {"start": 249.96, "end": 253.64000000000001, "text": " techniques, this is what I'm trying to accomplish with this video."}, {"start": 253.64000000000001, "end": 258.84000000000003, "text": " But that's not enough, so I also went to this year's biggest NATO 
conference and made"}, {"start": 258.84000000000003, "end": 264.92, "text": " sure that political and military decision-makers are also informed about this topic."}, {"start": 264.92, "end": 269.88, "text": " Last year, I went to the European Political Strategy Center with a similar goal."}, {"start": 269.88, "end": 275.6, "text": " I was so nervous before both of these talks and spent a long time rehearsing them, which"}, {"start": 275.6, "end": 278.36, "text": " delayed a few videos here on the channel."}, {"start": 278.36, "end": 283.24, "text": " However, because of your support on Patreon, I am in a fortunate situation where I can"}, {"start": 283.24, "end": 288.0, "text": " focus on doing what is right and what is the best for all of us and not worry about the"}, {"start": 288.0, "end": 289.84, "text": " financials all the time."}, {"start": 289.84, "end": 293.88, "text": " I am really grateful for that, it really is a true privilege."}, {"start": 293.88, "end": 294.88, "text": " Thank you."}, {"start": 294.88, "end": 298.92, "text": " If you wish to support us, make sure to click the Patreon link in the video description."}, {"start": 298.92, "end": 303.0, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=leoRHsBsv6Q
Finally, Style Transfer For Smoke Simulations! 💨
❤️ Check out Lambda here and sign up for their GPU Cloud: https://lambdalabs.com/papers 📝 The paper "Transport-Based Neural Style Transfer for Smoke Simulations" is available here: http://www.byungsoo.me/project/neural-flow-style/index.html 💨 My fluid control paper and its source code (and Blender implementation) is available here, pick it up if you're interested! If I remember correctly, you will have to be able to compile Blender. - https://users.cg.tuwien.ac.at/zsolnai/gfx/real_time_fluid_control_eg/ (If you improved this in some way, please let me know!) Wavelet Turbulence - one of the best papers ever written: http://www.tkim.graphics/WTURB/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Matthias Jost,, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
This episode has been supported by Lambda. Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. I can confidently say that this is the most excited I've been for a smoke simulation paper since wavelet turbulence. Wavelet turbulence is a magical algorithm from 2008 that takes a low-quality fluid or smoke simulation and increases its quality by filling in the remaining details. And here we are, 11 years later, and the results still hold up. Insanity. This is one of the best papers ever written and has significantly contributed to my decision to pursue a research career. And this new work performs style transfer for smoke simulations. If you haven't fallen out of your chair yet, let me try to explain why this is amazing. Style transfer is a technique in machine learning research where we have two input images, one for content and one for style, and the output is our content image reimagined with this new style. The cool part is that the content can be a photo straight from our camera and the style can be a painting, which leads to the super fun results that you see here. An earlier paper had shown that the more sophisticated ones can make even art curators think that they are real. However, doing this for smoke simulations is a big departure from 2D style transfer, because that one takes an image, whereas this works in 3D and does not deal with images but with density fields. A density field means a collection of numbers that describe how dense a smoke plume is at a given spatial position. It is a physical description of a smoke plume, if you will. So how could we possibly apply artistic style from an image to a collection of densities? This doesn't sound possible at all. Unfortunately, the problem gets even worse. Since we typically don't just want to look at a still image of a smoke plume but enjoy a physically correct simulation, not only the density fields but also the velocity fields and the forces that animate them over time have to be stylized. Again, that's either impossible or almost impossible to do. You see, if we run a proper smoke simulation, we'll see what would happen in reality, but that's not stylized. However, if we stylize, we get something that would not happen in Mother Nature. I spent my master's thesis trying to solve a problem called fluid control, which would try to coerce a smoke plume or a piece of fluid to take a given shape, like a bunny or a logo with letters. You can see some footage of what I came up with here. Here, both the simulation and the controlling force field are computed in real time on the graphics card, and as you see, it can be combined with wavelet turbulence. If you wish to hear more about this work, make sure to leave a comment. In any case, I had a wonderful time working on it, and if anyone wants to pick it up, the paper, the source code, and even a Blender add-on version are available in the video description. In any case, in a physics simulation we are trying to simulate reality, and for style transfer we are trying to depart from reality. The two are fundamentally incompatible, and we have to reconcile them in a way that is somehow still believable. Super challenging. However, back when I wrote the fluid control paper, learning-based algorithms were not nearly as developed; it turns out they can now help us perform style transfer for density fields and also animate them properly. Again, the problem definition is very easy. We take a smoke plume, we have an image for style, and the style of this image is somehow applied to the density field to get these incredible effects. Just look at these marvelous results. Fire textures, Starry Night, you name it. It seems to be able to do anything. One of the key ideas is that even though style transfer is challenging on highly detailed density fields, it becomes much easier if we first downsample the density field to a coarser version, perform the style transfer there, and upsample this density field again with already existing techniques. Rinse and repeat. The paper also describes a smoothing technique that ensures that the changes in the velocity fields that guide our density fields happen slowly over time to keep the animation believable. There are a lot more new ideas in the paper, so make sure to have a look. It also takes a while: the computation time is typically around 10 to 15 minutes per frame. But who cares? Today, with the ingenuity of research scientists and the power of machine learning algorithms, even the impossible seems possible. If it takes 15 minutes per frame, so be it. What a time to be alive. If you are a researcher or a startup looking for cheap GPU compute to run these algorithms, check out Lambda GPU Cloud. I've talked about Lambda's GPU workstations in other videos, and I'm happy to tell you that they are offering GPU Cloud services as well. The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19. Lambda's web-based IDE lets you easily access your instance right in your browser, and finally, hold on to your papers, because the Lambda GPU Cloud costs less than half of AWS and Azure. Make sure to go to LambdaLabs.com slash papers and sign up for one of their amazing GPU instances today. Thanks for watching and for your generous support, and I'll see you next time.
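To make the coarse-to-fine idea above a bit more concrete, here is a minimal, self-contained Python sketch of the downsample, stylize, upsample loop. It is not the paper's method: the stylize_coarse function is a hypothetical placeholder for a learned style transfer step, and the toy density field and style image are made up for illustration.

```python
# Sketch of the coarse-to-fine pipeline described above: stylize a downsampled
# density field, then upsample it back to full resolution.
import numpy as np
from scipy.ndimage import zoom

def stylize_coarse(density_coarse: np.ndarray, style_image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a neural style transfer step on the coarse field."""
    # Only a tiny style-dependent perturbation so the sketch runs end to end.
    perturbation = 0.05 * style_image.mean() * np.random.rand(*density_coarse.shape)
    return np.clip(density_coarse + perturbation, 0.0, 1.0)

def transfer_style(density: np.ndarray, style_image: np.ndarray, factor: int = 4) -> np.ndarray:
    """Downsample -> stylize -> upsample, mirroring the key idea in the transcript."""
    coarse = zoom(density, 1.0 / factor, order=1)            # cheap low-resolution copy
    coarse_stylized = stylize_coarse(coarse, style_image)    # style transfer on the coarse grid
    scale = np.array(density.shape) / np.array(coarse_stylized.shape)
    return zoom(coarse_stylized, scale, order=1)             # back to full resolution

density_field = np.random.rand(64, 64, 64)   # toy 3D smoke density
style = np.random.rand(256, 256, 3)          # toy "painting"
print(transfer_style(density_field, style).shape)
```

In the actual paper the stylization also has to respect the velocity fields over time, which is what the smoothing technique mentioned above addresses; this sketch only shows the spatial coarse-to-fine part.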
[{"start": 0.0, "end": 3.04, "text": " This episode has been supported by Lambda."}, {"start": 3.04, "end": 7.04, "text": " Dear Fellow Scholars, this is two minute papers with Karo Zsolnai-Fehir."}, {"start": 7.04, "end": 12.52, "text": " I can confidently say that this is the most excited I've been for a smoke simulation paper"}, {"start": 12.52, "end": 14.52, "text": " since wavelength turbulence."}, {"start": 14.52, "end": 20.68, "text": " Waveslet turbulence is a magical algorithm from 2008 that takes a low quality fluid or"}, {"start": 20.68, "end": 26.48, "text": " smoke simulation and increases its quality by filling in the remaining details."}, {"start": 26.48, "end": 31.2, "text": " And here we are, 11 years later, the results still hold up."}, {"start": 31.2, "end": 32.2, "text": " Insanity."}, {"start": 32.2, "end": 37.8, "text": " This is one of the best papers ever written and has significantly contributed to my decision"}, {"start": 37.8, "end": 39.96, "text": " to pursue a research career."}, {"start": 39.96, "end": 44.84, "text": " And this new work performs style transfer for smoke simulations."}, {"start": 44.84, "end": 50.28, "text": " If you haven't fallen out of your chair yet, let me try to explain why this is amazing."}, {"start": 50.28, "end": 55.120000000000005, "text": " Style transfer is a technique in machine learning research where we have two input images,"}, {"start": 55.12, "end": 60.879999999999995, "text": " one for content and one for style and the output is our content image reimagined with"}, {"start": 60.879999999999995, "end": 62.12, "text": " this new style."}, {"start": 62.12, "end": 67.24, "text": " The cool part is that the content can be a photo straight from our camera and the style"}, {"start": 67.24, "end": 71.8, "text": " can be a painting which leads to the super fun results that you see here."}, {"start": 71.8, "end": 76.75999999999999, "text": " An earlier paper had shown that the more sophisticated ones can make even art curators"}, {"start": 76.75999999999999, "end": 78.67999999999999, "text": " think that they are real."}, {"start": 78.67999999999999, "end": 83.6, "text": " However, doing this for smoke simulations is a big departure from 2D style transfer"}, {"start": 83.6, "end": 89.91999999999999, "text": " because that one takes an image where this works in 3D and does not deal with images but"}, {"start": 89.91999999999999, "end": 91.39999999999999, "text": " with density fields."}, {"start": 91.39999999999999, "end": 97.39999999999999, "text": " A density field means a collection of numbers that describe how dense a smoke plume is at"}, {"start": 97.39999999999999, "end": 99.03999999999999, "text": " a given spatial position."}, {"start": 99.03999999999999, "end": 102.39999999999999, "text": " It is a physical description of a smoke plume, if you will."}, {"start": 102.39999999999999, "end": 108.52, "text": " So how could we possibly apply artistic style from an image to a collection of densities?"}, {"start": 108.52, "end": 110.91999999999999, "text": " This doesn't sound possible at all."}, {"start": 110.92, "end": 114.28, "text": " Unfortunately, the problem gets even worse."}, {"start": 114.28, "end": 118.72, "text": " Since we typically don't just want to look at a still image of a smoke plume but enjoy"}, {"start": 118.72, "end": 124.12, "text": " a physically correct simulation not only the density fields but the velocity fields"}, {"start": 124.12, "end": 128.96, "text": " and the forces that animate them over time also 
have to be stylized."}, {"start": 128.96, "end": 133.84, "text": " Again that's either impossible or almost impossible to do."}, {"start": 133.84, "end": 140.08, "text": " You see if we run a proper smoke simulation we'll see what would happen in reality but"}, {"start": 140.08, "end": 141.56, "text": " that's not stylized."}, {"start": 141.56, "end": 146.92000000000002, "text": " However, if we stylize we get something that would not happen in Mother Nature."}, {"start": 146.92000000000002, "end": 151.84, "text": " I have spent my master's thesis trying to solve a problem called fluid control which"}, {"start": 151.84, "end": 158.28, "text": " would try to coerce a smoke plume or a piece of fluid to take a given shape, like a bunny"}, {"start": 158.28, "end": 160.24, "text": " or a logo with letters."}, {"start": 160.24, "end": 163.64000000000001, "text": " You can see some footage of what I came up with here."}, {"start": 163.64000000000001, "end": 169.36, "text": " Here both the simulation and the controlling force field is computed in real time on the"}, {"start": 169.36, "end": 174.28, "text": " graphics card and as you see it can be combined with wavelet turbulence."}, {"start": 174.28, "end": 178.56, "text": " If you wish to hear more about this work make sure to leave a comment but in any case"}, {"start": 178.56, "end": 183.60000000000002, "text": " I had a wonderful time working on it if anyone wants to pick it up the paper and the source"}, {"start": 183.60000000000002, "end": 188.76000000000002, "text": " code and even a blender add on version are available in the video description."}, {"start": 188.76000000000002, "end": 194.52, "text": " In any case in a physics simulation we are trying to simulate reality and for style transfer"}, {"start": 194.52, "end": 197.28000000000003, "text": " we are trying to depart from reality."}, {"start": 197.28, "end": 202.52, "text": " The two are fundamentally incompatible and we have to reconcile them in a way that"}, {"start": 202.52, "end": 205.52, "text": " is somehow still believable."}, {"start": 205.52, "end": 206.52, "text": " Super challenging."}, {"start": 206.52, "end": 211.6, "text": " However, back then when I wrote the fluid control paper learning based algorithms were"}, {"start": 211.6, "end": 217.92000000000002, "text": " not nearly as developed so it turns out they can help us perform style transfer for density"}, {"start": 217.92000000000002, "end": 221.16, "text": " fields and also animate them properly."}, {"start": 221.16, "end": 224.52, "text": " Again the problem definition is very easy."}, {"start": 224.52, "end": 230.28, "text": " Sometimes a smoke plume we had an image for style and the style of this image is somehow"}, {"start": 230.28, "end": 234.72, "text": " applied to the density field to get these incredible effects."}, {"start": 234.72, "end": 237.48000000000002, "text": " Just look at these marvelous results."}, {"start": 237.48000000000002, "end": 240.44, "text": " Fire textures, story knight, you name it."}, {"start": 240.44, "end": 242.84, "text": " It seems to be able to do anything."}, {"start": 242.84, "end": 247.92000000000002, "text": " One of the key ideas is that even though style transfer is challenging on highly detailed"}, {"start": 247.92000000000002, "end": 254.12, "text": " density fields it becomes much easier if we first down sample the density field to a"}, {"start": 254.12, "end": 260.04, "text": " coarser version, perform the style transfer there and up sample this density field again"}, 
{"start": 260.04, "end": 262.24, "text": " with already existing techniques."}, {"start": 262.24, "end": 263.8, "text": " Rinse and repeat."}, {"start": 263.8, "end": 268.96, "text": " The paper also describes a smoothing technique that ensures that the changes in the velocity"}, {"start": 268.96, "end": 275.48, "text": " fields that guide our density fields change slowly over time to keep the animation believable."}, {"start": 275.48, "end": 279.44, "text": " There are a lot more new ideas in the paper so make sure to have a look."}, {"start": 279.44, "end": 284.92, "text": " It also takes a while the computation time is typically around 10 to 15 minutes per frame"}, {"start": 284.92, "end": 290.44, "text": " but who cares today with the ingenuity of research scientists and the power of machine learning"}, {"start": 290.44, "end": 294.32, "text": " algorithms even the impossible seems possible."}, {"start": 294.32, "end": 297.68, "text": " If it takes 15 minutes per frame so be it."}, {"start": 297.68, "end": 299.32, "text": " What a time to be alive."}, {"start": 299.32, "end": 304.6, "text": " If you are a researcher or a startup looking for cheap GPU compute to run these algorithms"}, {"start": 304.6, "end": 307.36, "text": " check out Lambda GPU Cloud."}, {"start": 307.36, "end": 312.16, "text": " I've talked about Lambda's GPU workstations in other videos and I'm happy to tell you"}, {"start": 312.16, "end": 315.48, "text": " that they are offering GPU Cloud services as well."}, {"start": 315.48, "end": 322.68, "text": " The Lambda GPU Cloud can train ImageNet to 93% accuracy for less than $19."}, {"start": 322.68, "end": 328.32, "text": " Lambda's web-based IDE lets you easily access your instance right in your browser and"}, {"start": 328.32, "end": 335.52000000000004, "text": " finally hold on to your papers because the Lambda GPU Cloud costs less than half of AWS"}, {"start": 335.52000000000004, "end": 336.68, "text": " and Azure."}, {"start": 336.68, "end": 342.72, "text": " Make sure to go to LambdaLabs.com slash papers and sign up for one of their amazing GPU instances"}, {"start": 342.72, "end": 343.72, "text": " today."}, {"start": 343.72, "end": 373.28000000000003, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Uk9p4Kk98_g
Google's AI Plays Football…For Science! ⚽️
📝 The paper "Google Research Football: A Novel Reinforcement Learning Environment" is available here: https://arxiv.org/abs/1907.11180 https://github.com/google-research/football https://ai.googleblog.com/2019/06/introducing-google-research-football.html ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Matthias Jost,, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Reinforcement learning is an important subfield within machine learning research where we teach an agent to choose a set of actions in an environment to maximize the score. This enables these AIs to play Atari games at a superhuman level, control drones and robot arms, or even create self-driving cars. A few episodes ago, we talked about DeepMind's Behaviour Suite, which opened up the possibility of measuring how these AIs perform with respect to the seven core capabilities of reinforcement learning algorithms. Among them were how well such an AI performs when being shown a new problem, how well or how much they memorize, how willing they are to explore novel solutions, how well they scale to larger problems, and more. In the meantime, the Google Brain research team has also been busy creating a physics-based 3D football, or for some of you, soccer simulation, where we can ask an AI to control one or multiple players in this virtual environment. This is a particularly difficult task because it requires finding a delicate balance between rudimentary short-term control tasks, like passing, and long-term strategic planning. In this environment, we can also test our reinforcement learning agents against handcrafted rule-based teams. For instance, here you can see that DeepMind's IMPALA algorithm is the only one that can reliably beat the medium and hard handcrafted teams, specifically the one that was run for 500 million training steps. The easy case is tuned to be suitable for single-machine research works, whereas the hard case is meant to challenge sophisticated AIs that were trained on a massive array of machines. I like this idea a lot. Another design decision I particularly like here is that these agents can be trained from pixels or from the internal game state. Okay, so what does that really mean? Training from pixels is easy to understand, but very hard to perform. This simply means that the agents see the same content as what we see on the screen. DeepMind's deep reinforcement learning is able to do this by training a neural network to understand what events take place on the screen and passes, no pun intended, all this event information to a reinforcement learner that is responsible for the strategic, gameplay-related decisions. Now, what about the other one? The internal game state learning means that the algorithm sees a bunch of numbers which relate to quantities within the game, such as the position of all the players and the ball, the current score, and so on. This is typically easier to perform because the AI is given high-quality and relevant information and is not burdened with the task of visually parsing the entire scene. For instance, OpenAI's amazing Dota 2 team learned this way. Of course, to maximize impact, the source code for this project is also available. This will not only help researchers train and test their own reinforcement learning algorithms on a challenging scenario, but they can also extend it and make up their own scenarios. Now note that so far, I tried my hardest not to comment on the names of the players and the teams, but my will to resist just ran out. So, go realbations! Thanks for watching and for your generous support, and I'll see you next time!
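For readers who want to try the environment themselves, here is a minimal random-agent loop in Python. It assumes the Gym-style interface and the create_environment entry point with the env_name and representation arguments described in the project's GitHub README; if those names differ in your installed version, adjust accordingly.

```python
# Minimal random-agent loop against the Google Research Football environment.
# representation="simple115" exposes the internal game state (player and ball
# positions, score, ...); representation="pixels" would correspond to learning
# from the rendered screen instead.
import gfootball.env as football_env

env = football_env.create_environment(
    env_name="academy_empty_goal_close",
    representation="simple115",
    render=False,
)

observation = env.reset()
total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()                     # random placeholder policy
    observation, reward, done, info = env.step(action)
    total_reward += reward
print("episode return:", total_reward)
```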
[{"start": 0.0, "end": 4.16, "text": " Dear Fellow Scholars, this is two-minute papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.16, "end": 8.36, "text": " Reinforcement learning is an important subfield within machine learning research"}, {"start": 8.36, "end": 13.68, "text": " where we teach an agent to choose a set of actions in an environment to maximize the score."}, {"start": 13.68, "end": 18.84, "text": " This enables these AIs to play Atari games at a superhuman level,"}, {"start": 18.84, "end": 24.560000000000002, "text": " control drones, robot arms, or even create self-driving cars."}, {"start": 24.560000000000002, "end": 28.36, "text": " A few episodes ago, we talked about deep minds behavior suite"}, {"start": 28.36, "end": 32.72, "text": " that opened up the possibility of measuring how these AIs perform"}, {"start": 32.72, "end": 37.64, "text": " with respect to the seven core capabilities of reinforcement learning algorithms."}, {"start": 37.64, "end": 42.96, "text": " Among them were how well such an AI performs when being shown a new problem,"}, {"start": 42.96, "end": 48.760000000000005, "text": " how well or how much they memorize, how willing they are to explore novel solutions,"}, {"start": 48.760000000000005, "end": 52.2, "text": " how well they scale to larger problems, and more."}, {"start": 52.2, "end": 55.84, "text": " In the meantime, the Google Brain Research Team has also been busy"}, {"start": 55.84, "end": 61.440000000000005, "text": " creating a physics-based 3D football, or for some of you, soccer simulation,"}, {"start": 61.440000000000005, "end": 66.92, "text": " where we can ask an AI to control one or multiple players in this virtual environment."}, {"start": 66.92, "end": 72.16, "text": " This is a particularly difficult task because it requires finding a delicate balance"}, {"start": 72.16, "end": 78.72, "text": " between rudimentary short-term control tasks like passing and long-term strategic planning."}, {"start": 78.72, "end": 82.60000000000001, "text": " In this environment, we can also test our reinforcement learning agents"}, {"start": 82.60000000000001, "end": 85.60000000000001, "text": " against handcrafted rule-based teams."}, {"start": 85.6, "end": 89.19999999999999, "text": " For instance, here you can see that deep minds in Paula algorithm"}, {"start": 89.19999999999999, "end": 94.08, "text": " is the only one that can reliably beat the medium and hard handcrafted teams,"}, {"start": 94.08, "end": 99.39999999999999, "text": " specifically the one that was run for 500 million training steps."}, {"start": 99.39999999999999, "end": 104.16, "text": " The easy case is tuned to be suitable for single-machine research works"}, {"start": 104.16, "end": 109.28, "text": " where the hard case is meant to challenge sophisticated AIs that were trained"}, {"start": 109.28, "end": 111.67999999999999, "text": " on a massive array of machines."}, {"start": 111.67999999999999, "end": 113.52, "text": " I like this idea a lot."}, {"start": 113.52, "end": 116.39999999999999, "text": " Another design decision, I particularly like here,"}, {"start": 116.39999999999999, "end": 121.44, "text": " is that these agents can be trained from pixels or internal game state."}, {"start": 121.44, "end": 124.0, "text": " Okay, so what does that really mean?"}, {"start": 124.0, "end": 128.4, "text": " Training from pixels is easy to understand, but very hard to perform."}, {"start": 128.4, "end": 133.6, "text": " This simply means that the agents see the same 
content as what we see on the screen."}, {"start": 133.6, "end": 137.12, "text": " Deep minds deep reinforcement learning is able to do this"}, {"start": 137.12, "end": 142.32, "text": " by training a neural network to understand what events take place on the screen"}, {"start": 142.32, "end": 147.68, "text": " and passes, no pun intended, all this event information to a reinforcement learner"}, {"start": 147.68, "end": 151.6, "text": " that is responsible for the strategic gameplay related decisions."}, {"start": 152.4, "end": 154.4, "text": " Now, what about the other one?"}, {"start": 154.4, "end": 159.12, "text": " The internal game state learning means that the algorithm sees a bunch of numbers"}, {"start": 159.12, "end": 161.6, "text": " which relate to quantities within the game,"}, {"start": 161.6, "end": 166.95999999999998, "text": " such as the position of all the players and the ball, the current score, and so on."}, {"start": 166.95999999999998, "end": 171.51999999999998, "text": " This is typically easier to perform because the AI is given high quality"}, {"start": 171.52, "end": 177.76000000000002, "text": " and relevant information and is not burdened with the task of visually parsing the entire scene."}, {"start": 177.76000000000002, "end": 182.32000000000002, "text": " For instance, OpenAI's amazing Dota 2 team learned this way."}, {"start": 182.32000000000002, "end": 187.20000000000002, "text": " Of course, to maximize impact, the source code for this project is also available."}, {"start": 187.20000000000002, "end": 193.12, "text": " This will not only help researchers to train and test their own reinforcement learning algorithms"}, {"start": 193.12, "end": 198.64000000000001, "text": " on a challenging scenario, but they can also extend it and make up their own scenarios."}, {"start": 198.64, "end": 205.11999999999998, "text": " Now note that so far, I tried my hardest not to comment on the names of the players and the teams,"}, {"start": 205.11999999999998, "end": 207.76, "text": " but my will to resist just ran out."}, {"start": 207.76, "end": 210.0, "text": " So, go realbations!"}, {"start": 210.0, "end": 237.84, "text": " Thanks for watching and for your generous support, and I'll see you next time!"}]
Two Minute Papers
https://www.youtube.com/watch?v=0OtZ8dUFxXA
OpenAI’s GPT-2 Is Now Available - It Is Wise as a Scholar! 🎓
❤️ Check out Weights & Biases here and sign up for a free demo: https://www.wandb.com/papers Weights & Biases blog post (the notebook is available too!) - https://www.wandb.com/articles/visualize-xgboost-in-one-line - https://colab.research.google.com/drive/1SPludDkpAPonmdDdRlIJ7d5jB2FwVfZn#scrollTo=kszahG2PBN5R Try GPT-2 yourself here and post your results in the comments if you've found anything interesting. https://talktotransformer.com/ Check out this GPT-2 implementation too (thanks Robert Miles for the link!) - write something, then tab, enter, tab, enter and so on: https://transformer.huggingface.co/doc/gpt2-large OpenAI's post: https://openai.com/blog/gpt-2-6-month-follow-up/ Tweet source: https://twitter.com/gdm3000/status/1151469462614368256 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Matthias Jost,, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #OpenAI #GPT2 #GPT3
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. OpenAI's GPT-2 is a learning-based technique which can perform common natural language processing operations, for instance, answering questions, completing text, reading comprehension, summarization, and more. What is absolutely amazing about this technique is that it is able to perform these tasks with as little supervision as possible. This means that we unleash the algorithm that they call GPT-2 to read the internet, and it will learn our language by itself. To perform this, of course, we need a lot of training data, and here the AI reads 40GB of internet text, which is 40GB of non-binary plain-text data, which is a stupendously large amount of text. You see here an example of how it fares with text completion, and the results were absolutely amazing. And not only that, because hold onto your papers, we can now try it ourselves, and my results show that it can be as wise as a scholar, and it can also cheat like nobody's business. So, let's try something simple here. First, as everyone is talking about job security, I'd like to give it the start of this Two Minute Papers script, and see if I am about to be replaced. Let's see. It continues the intro with: the history of Eastern Europe since 1989 has been an interesting and complex story spanning four decades. On one side you have a regime of Western-backed dictators from the mid-1990s, and so on. Very scholarly indeed. However, I think I might be safe for now. And now, something that I wanted to try since the inception of GPT-2, which is: feed it a snippet of something that I said about wavelet turbulence, which is one of the best papers ever written, and have the AI finish it. I tried to frame it as an exam question, and was not disappointed by the results. So, the text is: there was a similar technique by the name wavelet turbulence, which is, I hope you see what I did there, and look what it did. There was a similar technique by the name wavelet turbulence, which is not at all what we are talking about. But I let you decide whether it is worth your time to read it. Wow! GPT-2 has been caught cheating, red-handed. I love it. Let's continue what it had to say. Anyway, my aim here is to describe how we use a fluid simulation to simulate smoke and liquid effects using just a few simple equations. Solving a few functions gives us a set of derivatives of the particle trajectories, that is, equations that describe how and where the particles will come and go across the liquid in the simulation. For simplicity, as a good teacher of high school students should know, we can use a simple equation which simulates two particles that collide. I am stunned. It recognized that we are talking about fluid simulations, which is already remarkable, but it went much further. The completion is not bad at all, and is not only coherent and on-topic, but has quite a bit of truth to it. I will have to rethink my previous claim about my job security. The even crazier thing is that the size of this model is about 750 million parameters, which is only half of the size of the original full model, which is expected to be even better. I put a link to this website in the video description for your pleasure. Make sure to play with it. This is mad fun. And GPT-2 will also see so many applications that we cannot even fathom yet. For instance, here you can see that one can train it on many source code files on GitHub, and it will be able to complete the code that we write on the fly. Now, nobody should think of this as GPT-2 writing programs for us. This is, of course, unlikely. However, it will ease the process for novice and expert users alike. If you have any other novel applications in mind, make sure to leave a comment below. For now, bravo OpenAI, and a big thank you to Danielle King and the Hugging Face company for this super convenient public implementation. Let the experiments begin. This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford, and Berkeley. It is very easy to set up. In fact, this blog post shows how we can use their framework to visualize our progress using XGBoost, a popular library for machine learning models. Get ready, because this is quite possibly the shortest blog post that you have seen. Yep, that was basically it. I don't think it can get any easier. Make sure to visit them through wandb.com slash papers, www.wandb.com slash papers, or just click the link in the video description and sign up for a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
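If you would rather experiment locally than through the linked website, a small Python sketch using the Hugging Face transformers library is shown below. The model name and sampling settings are illustrative choices, not the configuration behind the hosted demo.

```python
# Sketch of GPT-2 text completion with the Hugging Face `transformers` library.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large")
model = GPT2LMHeadModel.from_pretrained("gpt2-large")

prompt = "There was a similar technique by the name wavelet turbulence, which is"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Top-k sampling keeps the continuation varied while staying reasonably coherent.
output_ids = model.generate(
    input_ids,
    max_length=120,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```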
[{"start": 0.0, "end": 4.3, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.3, "end": 11.5, "text": " OpenAI GPT2 is a learning-based technique which can perform common natural language processing operations,"}, {"start": 11.5, "end": 19.5, "text": " for instance, answering questions, completing text, reading comprehension, summarization, and more."}, {"start": 19.5, "end": 27.5, "text": " What is absolutely amazing about this technique is that it is able to perform these tasks with as little supervision as possible."}, {"start": 27.5, "end": 35.8, "text": " This means that we unleash the algorithm that they call GPT2 to read the internet, and it will learn our language by itself."}, {"start": 35.8, "end": 43.5, "text": " To perform this, of course, we need a lot of training data, and here the AR reads 40GB of internet text,"}, {"start": 43.5, "end": 49.8, "text": " which is 40GB of non-binary plain-text data, which is a stupendously large amount of text."}, {"start": 49.8, "end": 57.2, "text": " You see here an example of how it fares with text completion, and the results were absolutely amazing."}, {"start": 57.2, "end": 66.9, "text": " And not only that, because hold onto your papers, we can now try it ourselves, and my results show that it can be as wise as a scholar,"}, {"start": 66.9, "end": 70.0, "text": " and it can also cheat like nobody's business."}, {"start": 70.0, "end": 72.5, "text": " So, let's try something simple here."}, {"start": 72.5, "end": 81.7, "text": " First, as everyone is talking about job security, I'd like to give it the start of this two-minute paper script, and see if I am about to be replaced."}, {"start": 81.7, "end": 82.7, "text": " Let's see."}, {"start": 82.7, "end": 93.0, "text": " It continues the intro with, the history of Eastern Europe since 1989 has been an interesting and complex story spanning four decades."}, {"start": 93.0, "end": 99.5, "text": " On one side you have a regime of Western-backed dictators from the mid-1990s, and so on."}, {"start": 99.5, "end": 101.30000000000001, "text": " Very scholarly indeed."}, {"start": 101.30000000000001, "end": 105.0, "text": " However, I think I might be safe for now."}, {"start": 105.0, "end": 114.6, "text": " And now, something that I wanted to try since the inception of GPT-2, which is, feed it a snippet of something that I said about wavelet turbulence,"}, {"start": 114.6, "end": 119.3, "text": " which is one of the best papers ever written, and have the AI finish it."}, {"start": 119.3, "end": 124.3, "text": " I tried to frame it as an exam question, and was not disappointed by the results."}, {"start": 124.3, "end": 130.8, "text": " So, the text is, there was a similar technique by the name wavelet turbulence, which is,"}, {"start": 130.8, "end": 134.6, "text": " I hope you see what I did there, and look what it did."}, {"start": 134.6, "end": 140.5, "text": " There was a similar technique by the name wavelet turbulence, which is not at all what we are talking about."}, {"start": 140.5, "end": 144.5, "text": " But I let you decide whether it is worth your time to read it."}, {"start": 144.5, "end": 145.5, "text": " Wow!"}, {"start": 145.5, "end": 149.2, "text": " GPT-2 has been caught cheating, red-handed."}, {"start": 149.2, "end": 150.2, "text": " I love it."}, {"start": 150.2, "end": 152.2, "text": " Let's continue what it had to say."}, {"start": 152.2, "end": 162.1, "text": " Anyway, my aim here is to describe how we use a fluid simulation to 
simulate smoke and liquid effects using just a few simple equations."}, {"start": 162.1, "end": 167.4, "text": " Solving a few functions gives us a set of derivatives of the particle trajectories,"}, {"start": 167.4, "end": 174.79999999999998, "text": " that is, equations that describe how and where the particles will come and go across the liquid in the simulation."}, {"start": 174.79999999999998, "end": 178.7, "text": " For simplicity, as a good teacher of high school students should know,"}, {"start": 178.7, "end": 183.79999999999998, "text": " we can use a simple equation which simulates two particles that collide."}, {"start": 183.79999999999998, "end": 185.4, "text": " I am stunned."}, {"start": 185.4, "end": 192.3, "text": " It recognized that we are talking about fluid simulations, which is already remarkable, but it went much further."}, {"start": 192.3, "end": 199.5, "text": " The completion is not bad at all, and is not only coherent, on-popic, but has quite a bit of truth to it."}, {"start": 199.5, "end": 203.70000000000002, "text": " I will have to rethink my previous claim about my job security."}, {"start": 203.70000000000002, "end": 209.8, "text": " The even crazier thing is that the size of this model is about 750 million parameters,"}, {"start": 209.8, "end": 216.0, "text": " which is only half of the size of the original full model, which is expected to be even better."}, {"start": 216.0, "end": 219.8, "text": " I put a link to this website in the video description for your pleasure."}, {"start": 219.8, "end": 221.3, "text": " Make sure to play with it."}, {"start": 221.3, "end": 223.10000000000002, "text": " This is mad fun."}, {"start": 223.10000000000002, "end": 228.9, "text": " And GPT2 will also see so many applications that we cannot even fathom yet."}, {"start": 228.9, "end": 234.10000000000002, "text": " For instance, here you can see that one can train it on many source code files on GitHub,"}, {"start": 234.10000000000002, "end": 238.60000000000002, "text": " and it will be able to complete the code that we write on the fly."}, {"start": 238.6, "end": 243.4, "text": " Now, nobody should think of this as GPT2 writing programs for us."}, {"start": 243.4, "end": 245.5, "text": " This is, of course, unlikely."}, {"start": 245.5, "end": 250.0, "text": " However, it will ease the process for novice and expert users alike."}, {"start": 250.0, "end": 254.5, "text": " If you have any other novel applications in mind, make sure to leave a comment below."}, {"start": 254.5, "end": 261.0, "text": " For now, Bravo OpenAI and a big thank you for Danielle King and the HuggingFace company"}, {"start": 261.0, "end": 263.8, "text": " for this super convenient public implementation."}, {"start": 263.8, "end": 265.7, "text": " Let the experiments begin."}, {"start": 265.7, "end": 269.09999999999997, "text": " This episode has been supported by weights and biases."}, {"start": 269.09999999999997, "end": 274.09999999999997, "text": " Wates and biases provides tools to track your experiments in your deep learning projects."}, {"start": 274.09999999999997, "end": 277.59999999999997, "text": " It can save you a ton of time and money in these projects"}, {"start": 277.59999999999997, "end": 283.09999999999997, "text": " and is being used by OpenAI, Toyota Research, Stanford, and Berkeley."}, {"start": 283.09999999999997, "end": 285.0, "text": " It is very easy to set up."}, {"start": 285.0, "end": 290.2, "text": " In fact, this blog post shows how we can use their framework to visualize 
our progress"}, {"start": 290.2, "end": 294.5, "text": " using XG Boost, a popular library for machine learning models."}, {"start": 294.5, "end": 299.6, "text": " Get ready, because this is quite possibly the shortest blog post that you have seen."}, {"start": 299.6, "end": 301.6, "text": " Yep, that was basically it."}, {"start": 301.6, "end": 303.8, "text": " I don't think it can get any easier."}, {"start": 303.8, "end": 307.5, "text": " Make sure to visit them through WendeeB.com slash papers,"}, {"start": 307.5, "end": 311.0, "text": " www.wendeeB.com slash papers,"}, {"start": 311.0, "end": 315.4, "text": " or just click the link in the video description and sign up for a free demo today."}, {"start": 315.4, "end": 319.5, "text": " Our thanks to weights and biases for helping us make better videos for you."}, {"start": 319.5, "end": 324.5, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=zrF5_O92ELQ
These Are The 7 Capabilities Every AI Should Have
❤️ Thank you so much for your support on Patreon: https://www.patreon.com/TwoMinutePapers 📝 The paper "Behaviour Suite for Reinforcement Learning" is available here: https://arxiv.org/abs/1908.03568 https://github.com/deepmind/bsuite 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Matthias Jost,, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. A few years ago, scientists at DeepMind published a learning algorithm that they called deep reinforcement learning, which quickly took the world by storm. This technique is a combination of a neural network that processes the visual data that we see on the screen and a reinforcement learner that comes up with the gameplay-related decisions, and it proved to be able to reach superhuman performance on computer games like Atari Breakout. This paper not only sparked quite a bit of mainstream media interest, but also provided fertile grounds for new follow-up research works to emerge. For instance, one of these follow-up papers infused these agents with a very human-like quality, curiosity, further improving many aspects of the original learning method. However, this had a disadvantage: I kid you not, it got addicted to the TV and kept staring at it forever. This was perhaps a little too human-like. In any case, you may rest assured that this shortcoming has been remedied since, and every follow-up paper recorded their scores on a set of Atari games. Measuring and comparing is an important part of research and is absolutely necessary so we can compare new learning methods more objectively. It's like recording your time for the Olympics at the 100-meter dash. In that case, it is quite easy to decide which athlete is the best. However, this is not so easy in AI research. In this paper, scientists at DeepMind note that just recording the scores doesn't give us enough information anymore. There's so much more to reinforcement learning algorithms than just scores. So, they built a behavior suite that also evaluates the seven core capabilities of reinforcement learning algorithms. Among these seven core capabilities, they list generalization, which tells us how well the agent is expected to do in previously unseen environments, and how good it is at credit assignment, which is a prominent problem in reinforcement learning. Credit assignment is very tricky to solve because, for instance, when we play a strategy game, we need to make a long sequence of strategic decisions, and in the end, if we lose an hour later, we have to figure out which one of these many, many decisions led to our loss. Measuring this as one of the core capabilities was, in my opinion, a great design decision here. How well the algorithm scales to larger problems also gets a spot as one of these core capabilities. I hope this testing suite will see widespread adoption in reinforcement learning research, and what I am really looking forward to is seeing these radar plots for newer algorithms, which will quickly reveal whether we have a new method that takes a different trade-off than previous methods, or, in other words, has the same area within the polygon but with a different shape, or whether, in the case of a real breakthrough, the area of these polygons will start to increase. Luckily, a few of these charts are already available in the paper, and they give us so much information about these methods. I could stare at them all day long, and I cannot wait to see some newer methods appear here. Now, note that there is a lot more to this paper. If you have a look at it in the video description, you will also find the experiments that are part of this suite, what makes a good environment to test these agents in, and that they plan to form a committee of prominent researchers to periodically review it. I love that part. If you enjoyed this video, please consider supporting us on Patreon.
If you do, we can offer you early access to these videos so you can watch them before anyone else or you can also get your name immortalized in the video description. Just click the link in the description if you wish to chip in. Thanks for watching and for your generous support and I'll see you next time.
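As a rough idea of how such an evaluation is driven in practice, here is a minimal Python sketch that runs a random agent on a single bsuite environment and logs results to CSV. The load_and_record entry point, the 'catch/0' environment id, the bsuite_num_episodes attribute, and the dm_env-style reset/step interface are taken from the project's README and should be treated as assumptions about the installed version.

```python
# Run a placeholder (random) agent on one bsuite environment and record results;
# the radar plots discussed above are aggregated from many such logged runs.
import numpy as np
import bsuite

env = bsuite.load_and_record("catch/0", save_path="/tmp/bsuite", logging_mode="csv")
num_actions = env.action_spec().num_values

for episode in range(env.bsuite_num_episodes):  # prescribed episode count (assumed attribute)
    timestep = env.reset()
    while not timestep.last():
        action = np.random.randint(num_actions)  # random policy as a stand-in
        timestep = env.step(action)
```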
[{"start": 0.0, "end": 4.0, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Jolnai-Fahir."}, {"start": 4.0, "end": 9.56, "text": " A few years ago, scientists at DeepMind published a learning algorithm that they called"}, {"start": 9.56, "end": 13.6, "text": " Deep Re-Inforcement Learning, which quickly took the world by storm."}, {"start": 13.6, "end": 18.44, "text": " This technique is a combination of a neural network that processes the visual data that"}, {"start": 18.44, "end": 23.96, "text": " we see on the screen and a reinforcement learner that comes up with the gameplay-related"}, {"start": 23.96, "end": 29.84, "text": " decisions which proved to be able to reach superhuman performance on computer games like"}, {"start": 29.84, "end": 31.56, "text": " Atari Breakout."}, {"start": 31.56, "end": 36.68, "text": " This paper not only sparked quite a bit of mainstream media interest, but also provided"}, {"start": 36.68, "end": 40.6, "text": " fertile grounds for new follow-up research works to emerge."}, {"start": 40.6, "end": 46.68, "text": " For instance, one of these follow-up papers infused these agents with a very human-like quality,"}, {"start": 46.68, "end": 51.760000000000005, "text": " curiosity, further improving many aspects of the original learning method."}, {"start": 51.760000000000005, "end": 57.92, "text": " However, had this advantage, I kid you not, it got addicted to the TV and kept staring"}, {"start": 57.92, "end": 59.519999999999996, "text": " at it forever."}, {"start": 59.52, "end": 62.56, "text": " This was perhaps a little too human-like."}, {"start": 62.56, "end": 67.32000000000001, "text": " In any case, you may rest assured that this shortcoming has been remedied since and every"}, {"start": 67.32000000000001, "end": 72.48, "text": " follow-up paper recorded their scores on a set of Atari games."}, {"start": 72.48, "end": 77.52000000000001, "text": " Measuring and comparing is an important part of research and is absolutely necessary so"}, {"start": 77.52000000000001, "end": 81.32000000000001, "text": " we can compare new learning methods more objectively."}, {"start": 81.32000000000001, "end": 85.76, "text": " It's like recording your time for the Olympics at the 100-meter dash."}, {"start": 85.76, "end": 90.24000000000001, "text": " In that case, it is quite easy to decide which athlete is the best."}, {"start": 90.24000000000001, "end": 94.4, "text": " However, this is not so easy in AI research."}, {"start": 94.4, "end": 99.88000000000001, "text": " In this paper, scientists at DeepMind note that just recording the scores doesn't give"}, {"start": 99.88000000000001, "end": 101.88000000000001, "text": " us enough information anymore."}, {"start": 101.88000000000001, "end": 106.4, "text": " There's so much more to reinforcement learning algorithms than just scores."}, {"start": 106.4, "end": 112.88000000000001, "text": " So, they built a behavior suite that also evaluates the seven core capabilities of reinforcement"}, {"start": 112.88000000000001, "end": 114.4, "text": " learning algorithms."}, {"start": 114.4, "end": 120.32000000000001, "text": " Among these seven core capabilities, they list generalization which tells us how well"}, {"start": 120.32000000000001, "end": 126.28, "text": " the agent is expected to do in previously unseen environments, how good it is at credit"}, {"start": 126.28, "end": 130.44, "text": " assignment, which is a prominent problem in reinforcement learning."}, {"start": 130.44, "end": 135.48000000000002, 
"text": " Credit assignment is very tricky to solve because, for instance, when we play a strategy"}, {"start": 135.48000000000002, "end": 141.96, "text": " game, we need to make a long sequence of strategic decisions and in the end, if we lose an hour"}, {"start": 141.96, "end": 147.92000000000002, "text": " later, we have to figure out which one of these many many decisions led to our loss."}, {"start": 147.92000000000002, "end": 152.96, "text": " Measuring this as one of the core capabilities was, in my opinion, a great design decision"}, {"start": 152.96, "end": 153.96, "text": " here."}, {"start": 153.96, "end": 159.12, "text": " How well the algorithm scales to larger problems also gets a spot as one of these core"}, {"start": 159.12, "end": 160.12, "text": " capabilities."}, {"start": 160.12, "end": 165.52, "text": " I hope this testing suite will see widespread adoption in reinforcement learning research"}, {"start": 165.52, "end": 170.92000000000002, "text": " and what I am really looking forward to is seeing these radar plots for newer algorithms"}, {"start": 170.92, "end": 175.39999999999998, "text": " which will quickly reveal whether we have a new method that takes a different trade-off"}, {"start": 175.39999999999998, "end": 181.44, "text": " than previous methods or, in other words, has the same area within the polygon but with"}, {"start": 181.44, "end": 187.04, "text": " a different shape or, in the case of a real breakthrough, the area of these polygons"}, {"start": 187.04, "end": 188.88, "text": " will start to increase."}, {"start": 188.88, "end": 194.79999999999998, "text": " Luckily, a few of these charts are already available in the paper and they give us so much information"}, {"start": 194.79999999999998, "end": 195.79999999999998, "text": " about these methods."}, {"start": 195.8, "end": 201.48000000000002, "text": " I could stare at them all day long and I cannot wait to see some newer methods appear here."}, {"start": 201.48000000000002, "end": 204.48000000000002, "text": " Now, note that there is a lot more to this paper."}, {"start": 204.48000000000002, "end": 208.72, "text": " If you have a look at it in the video description, you will also find the experiments that are"}, {"start": 208.72, "end": 214.60000000000002, "text": " part of this suite, what makes a good environment to test these agents in, and that they plan"}, {"start": 214.60000000000002, "end": 219.20000000000002, "text": " to form a committee of prominent researchers to periodically review it."}, {"start": 219.20000000000002, "end": 220.8, "text": " I love that part."}, {"start": 220.8, "end": 224.52, "text": " If you enjoyed this video, please consider supporting us on Patreon."}, {"start": 224.52, "end": 228.88000000000002, "text": " If you do, we can offer you early access to these videos so you can watch them before"}, {"start": 228.88000000000002, "end": 234.16000000000003, "text": " anyone else or you can also get your name immortalized in the video description."}, {"start": 234.16000000000003, "end": 236.84, "text": " Just click the link in the description if you wish to chip in."}, {"start": 236.84, "end": 263.6, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=uVC5WowQxD8
This is How You Simulate Making Pasta 🍜
❤️ Check out Linode here and get $20 free credit on your account: https://www.linode.com/papers 📷 Check us out on Instagram: https://www.instagram.com/twominutepapers/ 📝 The paper "A Multi-Scale Model for Coupling Strands with Shear-Dependent Liquid " is available here: http://www.cs.columbia.edu/cg/creamystrand/ ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Matthias Jost,, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Fluid simulation is a mature research field within computer graphics, with amazing papers that show us how to simulate water flows with lots of debris, how to perform liquid-fabric interactions, and more. This new project further improves the quality of these works and shows us how thin elastic strands interact with oil paint, mud, melted chocolate, and pasta sauce. There will be plenty of tasty and messy simulations ahead, not necessarily in that order, so make sure to hold onto your papers just in case. Here you see four scenarios of these different materials dripping off of a thin strand. So, why are these cases difficult to simulate? The reason it is difficult, if not flat out impossible, is that the hair strands and the fluid layers are so thin that they would require a microscopic simulation grid, or in other words, we would have to compute quantities like pressure and velocity on so many grid points that it would probably take not hours to days, but weeks to years to compute. I will show you a table in a moment where you will see that these amazingly detailed simulations can be done on a grid of surprisingly low resolution. As a result, our simulations also needn't be so tiny in scale, with one hair strand and a few drops of mud or water. They can be done on a much larger scale, so we can marvel together at these tasty and messy simulations; you decide which is which. I particularly like this animation with the oyster sauce because you can see a breakdown of the individual elements of the simulation. Note that all of the interactions between the noodles, the sauce, the fork, and the plate have to be simulated with precision. Love it. And now, the promised table. Here you can see the delta x, which tells us how fine the grid resolution is, and it is on the order of centimeters, not micrometers. Also note that this work is an extension of the material point method, which is a hybrid simulation method that uses both grids and particles. And sure enough, you can see here that it simulates up to tens of millions of particles as well, and the fact that the computation times are still only measured in a few minutes per frame is absolutely remarkable. Remember, the fact that we can simulate this at all is a miracle. Now, this was run on the processor, and a potential implementation on the graphics card could yield significant speedups. So I really hope something like this appears in the near future. Also, make sure to have a look at the paper itself, which is outrageously well written. If you wish to see more from this paper, make sure to follow us on Instagram, just search for Two Minute Papers there or click the link in the video description. Now, I am still working as a full-time research scientist at the Technical University of Vienna, and we train plenty of neural networks during our projects, which requires a lot of computational resources. Every time we have to spend time maintaining these machines, I wish we could use Linode. Linode is the world's largest independent cloud hosting and computing provider. If you feel inspired by these works and you wish to run your experiments or deploy your already existing works through a simple and reliable hosting service, make sure to join over 800,000 other happy customers and choose Linode.
To reserve your GPU instance and receive a $20 free credit, visit linode.com slash papers or click the link in the video description and use the promo code papers20 during signup. Give it a try today. Our thanks to Linode for supporting the series and helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
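A quick back-of-the-envelope sketch of the grid-resolution point made in the transcript above: the number of grid cells needed to cover a fixed simulation domain grows with the cube of the refinement, which is why a micrometer-scale grid is hopeless while a centimeter-scale grid is practical. The domain size and cell sizes below are illustrative assumptions, not values from the paper.

```python
# Back-of-the-envelope estimate of grid cell counts for a fixed simulation domain.
# Halving delta x multiplies the cell count by 8; going from centimeters to
# micrometers multiplies it by (10^4)^3 = 10^12. Numbers are illustrative only.

domain_size = 1.0  # edge length of a cubic simulation domain in meters (assumed)

for dx, label in [(1e-2, "centimeter grid"), (1e-3, "millimeter grid"), (1e-6, "micrometer grid")]:
    cells_per_axis = domain_size / dx
    total_cells = cells_per_axis ** 3
    print(f"{label}: dx = {dx:g} m -> {total_cells:.3g} grid cells")
```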
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Karolina Ifahir."}, {"start": 4.6000000000000005, "end": 9.92, "text": " Fluid simulation is a mature research field within computer graphics, with amazing papers"}, {"start": 9.92, "end": 16.48, "text": " that show us how to simulate water flows with lots of debris, how to perform liquid fabric"}, {"start": 16.48, "end": 19.0, "text": " interactions, and more."}, {"start": 19.0, "end": 24.44, "text": " This new project further improves the quality of these works and shows us how thin elastic"}, {"start": 24.44, "end": 31.560000000000002, "text": " strands interact with oil paint, mud, melted chocolate, and pasta sauce."}, {"start": 31.560000000000002, "end": 38.160000000000004, "text": " There will be plenty of tasty and messy simulations ahead, not necessarily in that order, so make"}, {"start": 38.160000000000004, "end": 41.24, "text": " sure to hold onto your papers just in case."}, {"start": 41.24, "end": 46.84, "text": " Here you see four scenarios of these different materials dripping off of a thin strand."}, {"start": 46.84, "end": 50.84, "text": " So, why are these cases difficult to simulate?"}, {"start": 50.84, "end": 56.040000000000006, "text": " The reason why it's difficult, if not flat out impossible because the hair strands and"}, {"start": 56.040000000000006, "end": 62.760000000000005, "text": " the fluid layers are so thin, it would require a simulation grid that is so microscopic,"}, {"start": 62.760000000000005, "end": 68.08000000000001, "text": " or in other words, we would have to perform our computations of quantities like pressure"}, {"start": 68.08000000000001, "end": 74.08000000000001, "text": " and velocity on so many grid points, it would probably take not from hours to days, but"}, {"start": 74.08000000000001, "end": 76.72, "text": " from weeks to years to compute."}, {"start": 76.72, "end": 80.88, "text": " I will show you a table in a moment where you will see that these amazingly detailed"}, {"start": 80.88, "end": 85.84, "text": " simulations can be done on a grid of surprisingly low resolution."}, {"start": 85.84, "end": 91.52, "text": " As a result, our simulations also needn't be so tiny in scale with one hair strand and"}, {"start": 91.52, "end": 93.88, "text": " a few drops of mud or water."}, {"start": 93.88, "end": 99.5, "text": " They can be done on a much larger scale so we can marvel together at least tasty and"}, {"start": 99.5, "end": 102.72, "text": " messy simulations you decide which is which."}, {"start": 102.72, "end": 107.92, "text": " I particularly like this animation with the oyster sauce because you can see a breakdown"}, {"start": 107.92, "end": 111.12, "text": " of the individual elements of the simulation."}, {"start": 111.12, "end": 116.64, "text": " Note that all of the interactions between the noodles, the sauce, the fork and plate have"}, {"start": 116.64, "end": 118.96000000000001, "text": " to be simulated with precision."}, {"start": 118.96000000000001, "end": 120.2, "text": " Love it."}, {"start": 120.2, "end": 122.52, "text": " And now the promised table."}, {"start": 122.52, "end": 127.8, "text": " Here you can see the delta X that means how fine the grid resolution is, which is in"}, {"start": 127.8, "end": 131.8, "text": " the order of centimeters and not micrometers."}, {"start": 131.8, "end": 136.60000000000002, "text": " Please be assured and don't forget that this work is an extension to the material point"}, 
{"start": 136.60000000000002, "end": 143.24, "text": " method which is a hybrid simulation method that both uses grids and particles."}, {"start": 143.24, "end": 148.92000000000002, "text": " And sure enough, you can see here that it simulates up to tens of millions of particles as"}, {"start": 148.92000000000002, "end": 154.20000000000002, "text": " well and the fact that the computation times are still only measured in a few minutes per"}, {"start": 154.20000000000002, "end": 157.24, "text": " frame is absolutely remarkable."}, {"start": 157.24, "end": 161.36, "text": " Remember the fact that we can simulate this at all is a miracle."}, {"start": 161.36, "end": 165.9, "text": " Now, this was run on the processor and the potentially implementation on the graphics"}, {"start": 165.9, "end": 168.96, "text": " card could yield us significant speed ups."}, {"start": 168.96, "end": 172.68, "text": " So I really hope something like this appears in the near future."}, {"start": 172.68, "end": 178.20000000000002, "text": " Also, make sure to have a look at the paper itself which is outrageously well written."}, {"start": 178.20000000000002, "end": 182.68, "text": " If you wish to see more from this paper, make sure to follow us on Instagram, just search"}, {"start": 182.68, "end": 186.92000000000002, "text": " for two minute papers there or click the link in the video description."}, {"start": 186.92, "end": 192.32, "text": " Now, I am still working as a full-time research scientist at the Technical University of Vienna"}, {"start": 192.32, "end": 197.79999999999998, "text": " and we train plenty of neural networks during our projects which requires a lot of computational"}, {"start": 197.79999999999998, "end": 199.16, "text": " resources."}, {"start": 199.16, "end": 204.79999999999998, "text": " Every time we have to spend time maintaining these machines, I wish we could use Linode."}, {"start": 204.79999999999998, "end": 209.44, "text": " Linode is the world's largest independent cloud hosting and computing provider."}, {"start": 209.44, "end": 214.39999999999998, "text": " If you feel inspired by these works and you wish to run your experiments or deploy your"}, {"start": 214.4, "end": 219.68, "text": " already existing works through a simple and reliable hosting service, make sure to join"}, {"start": 219.68, "end": 224.20000000000002, "text": " over 800,000 other happy customers and choose Linode."}, {"start": 224.20000000000002, "end": 231.36, "text": " To reserve your GPU instance and receive a $20 free credit, visit linode.com slash papers"}, {"start": 231.36, "end": 237.20000000000002, "text": " or click the link in the video description and use the promo code papers20 during signup."}, {"start": 237.20000000000002, "end": 238.48000000000002, "text": " Give it a try today."}, {"start": 238.48000000000002, "end": 243.28, "text": " Our thanks to Linode for supporting the series and helping us make better videos for you."}, {"start": 243.28, "end": 247.12, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=duo-tHbSdMk
New Face Swapping AI Creates Amazing DeepFakes!
📝 The paper "FSGAN: Subject Agnostic Face Swapping and Reenactment" is available here: https://nirkin.com/fsgan/ ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Matthias Jost,, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #DeepFake #FaceSwap
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Recently, we have experienced an abundance of papers on facial reenactment in machine learning research. We talked about a technique by the name Face2Face back in 2016, approximately 300 videos ago. It was able to take a video of us and transfer our gestures to a target subject. This was kind of possible at the time with specialized depth cameras, until Face2Face appeared and took the world by storm, as it was able to perform what you see here with a regular consumer camera. However, it only transferred gestures. So of course, scientists were quite excited about the possibility of transferring more than just that. But that would require solving so many more problems. For instance, if we wish to turn the head of the target subject, we may need to visualize regions that we haven't seen in these videos, which also requires an intuitive understanding of hair, the human face, and more. This is quite challenging. So, can this really be done? Well, have a look at this amazing new paper. You see here the left image, this is the source person, the video on the right is the target video, and our task is to transfer not just the gestures, but the pose, gestures, and appearance of the face on the left to the video on the right. And this new method works like magic. Look, it not only works like magic, but pulls it off on a surprisingly large variety of cases, many of which I hadn't expected at all. Now, hold on to your papers, because this technique was not trained on these subjects, which means that this is the first time it is seeing these people. It has been trained on plenty of people, but not these people. Now, before we look at this example, you are probably saying, well, the occlusions from the microphone will surely throw the algorithm off. Right? Well, let's have a look. Nope, no issues at all. Absolutely amazing. Love it. So, how does this wizardry work exactly? Well, it requires careful coordination between no less than four neural networks, each of which specializes in a different task. The first two are a reenactment generator that produces a first estimation of the reenacted face, and a segmentation generator network that creates this colorful image that shows which region in the image corresponds to which facial landmark. These two are then handed over to the third network, the inpainting generator, which fills in the rest of the image, and since we have overlapping information, in comes the fourth, the blending generator, to the rescue to combine all this information into our final image. The paper contains a detailed description of each of these networks, so make sure to have a look. And if you do, you will also find that there are plenty of comparisons against previous works. Of course, Face2Face is one of them, which was already amazing, and you can see how far we've come in only three years. Now, when we try to evaluate such research work, we are curious as to how these individual puzzle pieces, in this case the generator networks, contribute to the final results. Are all of them really needed? What if we remove some of them? Well, this is a good paper, so we can find the answer in Table 2, where all of these components are tested in isolation. The downward and upward arrows show which measure is subject to minimization and maximization, and if we look at this column, it is quite clear that all of them indeed improve the situation and contribute to the final results.
And remember, all this from just one image of the source person. Insanity. Thanks for watching and for your generous support, and I'll see you next time.
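The transcript above describes four cooperating generator networks. The sketch below only shows how such a pipeline could be wired together; every network is a stub returning dummy arrays, and all function names are hypothetical placeholders rather than the authors' code.

```python
# Illustrative wiring of a four-stage face-swapping pipeline, loosely following
# the description above: reenactment -> segmentation -> inpainting -> blending.
# Each "network" is a stub; the real system uses trained neural networks.
import numpy as np

def reenactment_generator(source_face, target_frame):
    # Would produce a first estimate of the source face in the target pose.
    return np.zeros_like(target_frame)

def segmentation_generator(frame):
    # Would label face / hair / background regions of the frame.
    return np.zeros(frame.shape[:2], dtype=np.int64)

def inpainting_generator(reenacted, segmentation):
    # Would fill in regions the reenacted estimate could not cover.
    return reenacted

def blending_generator(inpainted, target_frame, segmentation):
    # Would merge the overlapping information into one seamless output frame.
    return np.where(segmentation[..., None] > 0, inpainted, target_frame)

def swap_face(source_face, target_frame):
    reenacted = reenactment_generator(source_face, target_frame)
    segmentation = segmentation_generator(target_frame)
    inpainted = inpainting_generator(reenacted, segmentation)
    return blending_generator(inpainted, target_frame, segmentation)

output = swap_face(np.zeros((256, 256, 3)), np.zeros((256, 256, 3)))
print(output.shape)  # (256, 256, 3)
```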
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zorna Efehir."}, {"start": 4.4, "end": 10.8, "text": " Recently, we have experienced an abundance of papers on facial reenactment in machine learning research."}, {"start": 10.8, "end": 18.6, "text": " We talked about a technique by the name Phase to Phase back in 2016, approximately 300 videos ago."}, {"start": 18.6, "end": 24.400000000000002, "text": " It was able to take a video of us and transfer our gestures to a target subject."}, {"start": 24.4, "end": 32.8, "text": " This was kind of possible at the time with specialized depth cameras until Phase to Phase appeared and took the world by storm"}, {"start": 32.8, "end": 38.0, "text": " as it was able to perform what you see here with a regular consumer camera."}, {"start": 38.0, "end": 41.0, "text": " However, it only transferred gestures."}, {"start": 41.0, "end": 48.0, "text": " So of course, scientists were quite excited about the possibility of transferring more than just that."}, {"start": 48.0, "end": 55.6, "text": " But that would require solving so many more problems. For instance, if we wish to turn the head of the target subject,"}, {"start": 55.6, "end": 63.2, "text": " we may need to visualize regions that we haven't seen in these videos, which also requires an intuitive understanding of hair,"}, {"start": 63.2, "end": 65.4, "text": " the human face, and more."}, {"start": 65.4, "end": 67.6, "text": " This is quite challenging."}, {"start": 67.6, "end": 70.6, "text": " So, can this be really done?"}, {"start": 70.6, "end": 73.8, "text": " Well, have a look at this amazing new paper."}, {"start": 73.8, "end": 80.39999999999999, "text": " You see here the left image, this is the source person, the video on the right is the target video,"}, {"start": 80.39999999999999, "end": 91.2, "text": " and our task is to transfer not just the gestures, but the pose, gestures, and appearance of the face on the left to the video on the right."}, {"start": 91.2, "end": 94.8, "text": " And this no method works like magic."}, {"start": 94.8, "end": 101.2, "text": " Look, it not only works like magic, but pulls it off on a surprisingly large variety of cases,"}, {"start": 101.2, "end": 104.4, "text": " many of which I haven't expected at all."}, {"start": 104.4, "end": 113.8, "text": " Now, hold on to your papers because this technique was not trained on these subjects, which means that this is the first time it is seeing these people."}, {"start": 113.8, "end": 117.80000000000001, "text": " It has been trained on plenty of people, but not these people."}, {"start": 117.80000000000001, "end": 126.4, "text": " Now, before we look at this example, you are probably saying, well, the occlusions from the microphone will surely throw the algorithm off."}, {"start": 126.4, "end": 127.4, "text": " Right?"}, {"start": 127.4, "end": 132.4, "text": " Well, let's have a look."}, {"start": 132.4, "end": 135.0, "text": " Nope, no issues at all."}, {"start": 135.0, "end": 136.6, "text": " Absolutely amazing."}, {"start": 136.6, "end": 137.6, "text": " Love it."}, {"start": 137.6, "end": 140.6, "text": " So, how does this wizardry work exactly?"}, {"start": 140.6, "end": 146.8, "text": " Well, it requires careful coordination between no less than four neural networks,"}, {"start": 146.8, "end": 150.6, "text": " where each of which specializes for a different task."}, {"start": 150.6, "end": 166.2, "text": " The first tool is a reenactment generator that produces a 
first estimation of the reenacted face, and the segmentation generator network that creates this colorful image that shows which region in the image corresponds to which facial landmark."}, {"start": 166.2, "end": 173.0, "text": " These two are then handed over to the third network, the in-painting generator, which fills the rest of the image,"}, {"start": 173.0, "end": 183.4, "text": " and since we have overlapping information, incomes the fourth blending generator to the rescue to combine all this information into our final image."}, {"start": 183.4, "end": 189.6, "text": " The paper contains a detailed description of each of these networks, so make sure to have a look."}, {"start": 189.6, "end": 195.1, "text": " And if you do, you will also find that there are plenty of comparisons against previous works."}, {"start": 195.1, "end": 203.5, "text": " Of course, face to face is one of them, which was already amazing, and you can see how far we've come in only three years."}, {"start": 203.5, "end": 215.0, "text": " Now, when we try to evaluate such research work, we are curious as to how these individual puzzle pieces, in this case the generator networks, contribute to the final results."}, {"start": 215.0, "end": 217.4, "text": " Or all of them really needed."}, {"start": 217.4, "end": 219.4, "text": " What if we remove some of them?"}, {"start": 219.4, "end": 227.5, "text": " Well, this is a good paper, so we can find the answer in Table 2, where all of these components are tested in isolation."}, {"start": 227.5, "end": 242.6, "text": " The downward and upward arrows show which measure is subject to minimization and maximization, and if we look at this column, it is quite clear that all of them indeed improve the situation and contribute to the final results."}, {"start": 242.6, "end": 247.3, "text": " And remember, all this from just one image of the source person."}, {"start": 247.3, "end": 252.8, "text": " Insanity. Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=qkHK1QdQ2Fk
This AI Clears Up Your Hazy Photos
❤️ Check out Weights & Biases here and sign up for a free demo: https://www.wandb.com/papers 📝 The paper "Double-DIP: Unsupervised Image Decomposition via Coupled Deep-Image-Priors" is available here: http://www.wisdom.weizmann.ac.il/~vision/DoubleDIP/ https://github.com/yossigandelsman/DoubleDIP ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Matthias Jost,, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Today we are going to talk about a paper that builds on a previous work by the name Deep Image Prior, DIP in short. This work was capable of performing JPEG compression artifact removal, image inpainting, or in other words, filling in parts of the image with data that makes sense, super resolution, and image denoising. It was quite the package. This new method is able to subdivide an image into a collection of layers, which makes it capable of doing many seemingly unrelated tasks. For instance, one, it can do image segmentation, which typically means producing a mask that shows us the boundaries between the foreground and the background. As an additional advantage, it can also do this for videos as well. Two, it can perform dehazing, which can also be thought of as a decomposition task where the input is one image, and the output is an image with haze, and one with the objects hiding behind the haze. If you spend a tiny bit of time looking out the window on a hazy day, you will immediately see that this is immensely difficult, mostly because of the fact that the amount of haze that we see is non-uniform along the landscape. The AI has to detect and remove just the right amount of this haze and recover the original colors of the image. And three, it can also subdivide these crazy examples where two images are blended together. In a moment, I'll show you a better example with a complex texture where it is easier to see the utility of such a technique. And four, of course, it can also perform image inpainting, which, for instance, can help us remove watermarks or other unwanted artifacts from our photos. This case can also be thought of as an image layer plus a watermark layer, and the algorithm is able to recover both of them. As you see here on the right, a tiny part of the content seems to bleed into the watermark layer, but the results are still amazing. It does this by using multiple of these DIPs, deep image prior networks, and hence goes by the name Double-DIP. That one got me good when I first saw it. You see here how it tries to reproduce this complex textured pattern as a sum of these two much simpler individual components. The supplementary materials are available right in your browser and show you a ton of comparisons against other previous works. Here you see the results of these earlier works on image dehazing, and see that indeed the new results are second to none. And all this progress within only two years. What a time to be alive. If, like me, you love information theory, woohoo! Make sure to have a look at the paper and you'll be a happy person. This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It is like a shared logbook for your team, and with this, you can compare your own experiment results, put them next to what your colleagues did, and you can discuss your successes and failures much easier. It takes less than five minutes to set up and is being used by OpenAI, Toyota Research, Stanford, and Berkeley. It was also used in this OpenAI project that you see here, which we covered earlier in the series. They reported that experiment tracking was crucial in this project, and that this tool saved them quite a bit of time and money. If only I had access to such a tool during our last research project where I had to compare the performance of neural networks for months and months.
Well, it turns out I will be able to get access to these tools because, get this, it's free, and will always be free for academics and open source projects. Make sure to visit them through wandb.com slash papers, or just click the link in the video description and sign up for a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
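To make the layer-decomposition idea above more concrete, here is a minimal sketch in the spirit of coupled deep image priors: two small untrained CNNs, each fed fixed noise, are optimized so that their outputs sum to the input image. The real Double-DIP adds task-specific masks and regularizers; the architecture and hyperparameters below are arbitrary illustrative choices.

```python
# Toy sketch of coupled deep image priors: two untrained CNNs are each fed
# fixed random noise, and their outputs are optimized so that they sum to the
# input image. Only the coupling is shown; the real method adds more losses.
import torch
import torch.nn as nn

def tiny_prior_net():
    return nn.Sequential(
        nn.Conv2d(8, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
    )

image = torch.rand(1, 3, 64, 64)          # stand-in for the mixed input image
noise1, noise2 = torch.randn(1, 8, 64, 64), torch.randn(1, 8, 64, 64)
net1, net2 = tiny_prior_net(), tiny_prior_net()
opt = torch.optim.Adam(list(net1.parameters()) + list(net2.parameters()), lr=1e-3)

for step in range(200):
    layer1, layer2 = net1(noise1), net2(noise2)
    loss = ((layer1 + layer2 - image) ** 2).mean()   # reconstruction loss only
    opt.zero_grad()
    loss.backward()
    opt.step()

# In this toy setup, layer1 and layer2 now hold the two recovered image layers.
```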
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is Two Minute Papers with Kato Jolene Fahir."}, {"start": 4.4, "end": 12.4, "text": " Today we are going to talk about a paper that builds on a previous work by the name Deep Image Priors, Deep Inshort."}, {"start": 12.4, "end": 19.84, "text": " This work was capable of performing JPEG compression artifact removal, image impainting, or in other words,"}, {"start": 19.84, "end": 28.0, "text": " filling in parts of the image with data that makes sense, super resolution, and image denoising."}, {"start": 28.0, "end": 34.56, "text": " It was quite the package. This new method is able to subdivide an image into a collection of layers,"}, {"start": 34.56, "end": 38.88, "text": " which makes it capable of doing many seemingly unrelated tasks."}, {"start": 38.88, "end": 48.56, "text": " For instance, one, it can do image segmentation, which typically means producing a mask that shows us the boundaries between the foreground and the background."}, {"start": 48.56, "end": 52.64, "text": " As an additional advantage, it can also do this for videos as well."}, {"start": 52.64, "end": 60.4, "text": " Two, it can perform the hazing, which can also be thought of as a decomposition task where the input is one image,"}, {"start": 60.4, "end": 66.4, "text": " and the output is an image with haze, and one with the objects hiding behind the haze."}, {"start": 66.4, "end": 74.48, "text": " If you spend a tiny bit of time looking at the window on a hazey day, you will immediately see that this is immensely difficult,"}, {"start": 74.48, "end": 80.4, "text": " mostly because of the fact that the amount of haze that we see is non-uniform along the landscape."}, {"start": 80.4, "end": 88.64, "text": " The AI has to detect and remove just the right amount of this haze and recover the original colors of the image."}, {"start": 88.64, "end": 95.44000000000001, "text": " And three, it can also subdivide these crazy examples where two images are blended together."}, {"start": 95.44000000000001, "end": 102.88000000000001, "text": " In a moment, I'll show you a better example with a complex texture where it is easier to see the utility of such a technique."}, {"start": 102.88, "end": 114.08, "text": " And four, of course, it can also perform image-in-painting, which, for instance, can help us remove watermarks or other unwanted artifacts from our photos."}, {"start": 114.08, "end": 122.64, "text": " This case can also be thought of as an image layer, plus a watermark layer, and the algorithm is able to recover both of them."}, {"start": 122.64, "end": 131.12, "text": " As you see here on the right, a tiny part of the content seems to bleed into the watermark layer, but the results are still amazing."}, {"start": 131.12, "end": 138.64000000000001, "text": " It does this by using multiple of these dips, deep image prior networks, and goes by the name DoubleDip."}, {"start": 138.64000000000001, "end": 141.28, "text": " That one got me good when I first seen it."}, {"start": 141.28, "end": 150.4, "text": " You see here, how it tries to reproduce this complex textured pattern as a sum of these two much simpler individual components."}, {"start": 150.4, "end": 158.96, "text": " The supplementary materials are available right in your browser and show you a ton of comparisons against other previous works."}, {"start": 158.96, "end": 166.88, "text": " Here you see the results of these earlier works on image-dhasing and see that indeed the new results are second to 
none."}, {"start": 166.88, "end": 172.08, "text": " And all this progress within only two years. What a time to be alive."}, {"start": 172.08, "end": 174.96, "text": " If like me, you'll have information theory."}, {"start": 174.96, "end": 179.44, "text": " Woohoo! Make sure to have a look at the paper and you'll be a happy person."}, {"start": 179.44, "end": 182.84, "text": " This episode has been supported by weights and biases."}, {"start": 182.84, "end": 193.48000000000002, "text": " Weight and biases provides tools to track your experiments in your deep learning projects. It is like a shared logbook for your team, and with this, you can compare your own experiment results,"}, {"start": 193.48000000000002, "end": 199.4, "text": " put them next to what your colleagues did, and you can discuss your successes and failures much easier."}, {"start": 199.4, "end": 207.08, "text": " It takes less than five minutes to set up and is being used by OpenAI, Toyota Research, Stanford, and Berkeley."}, {"start": 207.08, "end": 212.76000000000002, "text": " It was also used in this OpenAI project that you see here, which we covered earlier in the series."}, {"start": 212.76000000000002, "end": 220.84, "text": " They reported that experiment tracking was crucial in this project, and that this tool saved them quite a bit of time and money."}, {"start": 220.84, "end": 229.72000000000003, "text": " If only I had access to such a tool during our last research project where I had to compare the performance of neural networks for months and months."}, {"start": 229.72, "end": 239.96, "text": " Well, it turns out I will be able to get access to these tools because get this, it's free, and will always be free for academics and open source projects."}, {"start": 239.96, "end": 252.12, "text": " Make sure to visit them through whendb.com slash papers, wamdb.com slash papers, or just click the link in the video description and sign up for a free demo today."}, {"start": 252.12, "end": 260.28000000000003, "text": " Our thanks to weights and biases for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=AOZw1tgD8dA
Adversarial Attacks on Neural Networks - Bug or Feature?
❤️ Support us on Patreon: https://www.patreon.com/TwoMinutePapers 📝 The paper "Adversarial Examples Are Not Bugs, They Are Features" is available here: http://gradientscience.org/adv/ The Distill discussion article is available here: https://distill.pub/2019/advex-bugs-discussion/ If you wish to play with some of these Distill articles, look here: - https://distill.pub/2017/feature-visualization/ - https://distill.pub/2018/building-blocks/ Andrej Karpathy’s image classifier - you can run this in your web browser: https://cs.stanford.edu/people/karpathy/convnetjs/demo/cifar10.html 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Matthias Jost,, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-2981865/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This will be a little non-traditional video where the first half of the episode will be about a paper and the second part will be about something else. Also a paper. Well, kind of. You'll see. We've seen in the previous years that neural network-based learning methods are amazing at image classification, which means that after training on a few thousand training examples, they can look at a new, previously unseen image and tell us whether it depicts a frog or a bus. Earlier we have shown that we can fool neural networks by adding carefully crafted noise to an image, which we often refer to as an adversarial attack on a neural network. If done well, this noise is barely perceptible and, get this, can fool the classifier into looking at a bus and thinking that it is an ostrich. These attacks typically require modifying a large portion of the input image, so when talking about a later paper, we were wondering what could be the lowest number of pixel changes that we have to perform to fool a neural network. What is the magic number? Based on the results of previous research works, an educated guess would be somewhere around a hundred pixels. A follow-up paper gave us an unbelievable answer by demonstrating the one-pixel attack. You see here that by changing only one pixel in an image that depicts a horse, the AI will be 99.9% sure that we are seeing a frog. A ship can also be disguised as a car or, amusingly, with a properly executed one-pixel attack, almost anything can be seen as an airplane by the neural network. And this new paper discusses whether we should look at these adversarial examples as bugs or not, and of course, does a lot more than that. It argues that most data sets contain features that are predictive, meaning that they provide help for a classifier to find cats, but also non-robust, which means that they provide a rather brittle understanding that falls apart in the presence of adversarial changes. We are also shown how to find and eliminate these non-robust features from already existing data sets, and that we can build much more robust classifier neural networks as a result. This is a truly excellent paper that sparked quite a bit of discussion. And here comes the second part of the video with the something else. An interesting new article was published within the Distill journal, a journal where you can expect clearly worded papers with beautiful and interactive visualizations. But this is no ordinary article, this is a so-called discussion article where a number of researchers were asked to write comments on this paper and create interesting back-and-forth discussions with the original authors. Now, make no mistake, the paper we've talked about was peer-reviewed, which means that independent experts have spent time scrutinizing the validity of the results, so this new discussion article was meant to add to it by getting others to replicate the results and clear up potential misunderstandings. Through publishing six of these mini-discussions, each of which was addressed by the original authors, they were able to clarify the main takeaways of the paper and even added a section of non-claims as well. For instance, it's been clarified that they don't claim that adversarial examples arise from software bugs. A huge thanks to the Distill journal and all the authors who participated in this discussion, and Ferenc Huszár, who suggested the idea of the discussion article to the journal.
I'd love to see more of this and if you do too, make sure to leave a comment so we can show them that these endeavors to raise the replicability and clarity of research works are indeed welcome. Make sure to click the link to both works in the video description and spend a little quality time with them. You'll be glad you did. I think this was a more complex than average paper to talk about, however, as you have noticed, the visual fireworks were not there. As a result, I expect this to get significantly fewer views. That's not a great business model, but no matter, I made this channel so I can share with you all these important lessons that I learned during my journey. This has been a true privilege and I am thrilled that I am still able to talk about all these amazing papers without worrying too much whether any of these videos go viral or not. Things like this are only possible because of your support on patreon.com slash two minute papers. If you feel like chipping in, just click the Patreon link in the video description. This is why every video ends with, you know what's coming. Thanks for watching and for your generous support and I'll see you next time.
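For readers wondering what "carefully crafted noise" looks like in code, below is the classic fast gradient sign method, which is not the method of the paper above but the standard textbook way to construct an adversarial perturbation. The classifier here is an untrained stand-in; in practice one would load a trained model.

```python
# Fast gradient sign method (FGSM): nudge the input image in the direction that
# increases the classification loss, with a barely perceptible step size.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in input image
true_label = torch.tensor([3])

loss = loss_fn(model(image), true_label)
loss.backward()

epsilon = 0.01                                   # barely perceptible step size
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

print(model(image).argmax(dim=1), model(adversarial).argmax(dim=1))
```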
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zonai-Fahir."}, {"start": 4.64, "end": 9.120000000000001, "text": " This will be a little non-traditional video where the first half of the episode will be"}, {"start": 9.120000000000001, "end": 13.92, "text": " about a paper and the second part will be about something else."}, {"start": 13.92, "end": 15.32, "text": " Also a paper."}, {"start": 15.32, "end": 16.84, "text": " Well, kind of."}, {"start": 16.84, "end": 17.84, "text": " You'll see."}, {"start": 17.84, "end": 22.28, "text": " We've seen in the previous years that neural network-based learning methods are amazing"}, {"start": 22.28, "end": 26.96, "text": " at image classification, which means that after training on a few thousand training"}, {"start": 26.96, "end": 32.6, "text": " examples, they can look at a new previously unseen image and tell us whether it depicts"}, {"start": 32.6, "end": 34.92, "text": " a frog or a bus."}, {"start": 34.92, "end": 40.160000000000004, "text": " Earlier we have shown that we can fool neural networks by adding carefully crafted noise"}, {"start": 40.160000000000004, "end": 45.480000000000004, "text": " to an image which we often refer to as an adversarial attack on a neural network."}, {"start": 45.480000000000004, "end": 52.120000000000005, "text": " If done well, this noise is barely perceptible and, get this, can fool the classifier into"}, {"start": 52.120000000000005, "end": 56.72, "text": " looking at a bus and thinking that it is an ostrich."}, {"start": 56.72, "end": 61.68, "text": " These attacks typically require modifying a large portion of the input image, so when"}, {"start": 61.68, "end": 67.92, "text": " talking about a later paper, we were thinking what could be the lowest number of pixel changes"}, {"start": 67.92, "end": 71.28, "text": " that we have to perform to fool a neural network."}, {"start": 71.28, "end": 73.4, "text": " What is the magic number?"}, {"start": 73.4, "end": 78.28, "text": " Based on the results of previous research works, an educated guess would be somewhere around"}, {"start": 78.28, "end": 79.28, "text": " a hundred pixels."}, {"start": 79.28, "end": 86.24, "text": " A follow-up paper gave us an unbelievable answer by demonstrating the one pixel attack."}, {"start": 86.24, "end": 92.11999999999999, "text": " You see here that by changing only one pixel in an image that depicts a horse, the AI will"}, {"start": 92.11999999999999, "end": 96.75999999999999, "text": " be 99.9% sure that we are seeing a frog."}, {"start": 96.75999999999999, "end": 103.88, "text": " A ship can also be disguised as a car or, amusingly, with a properly executed one pixel attack"}, {"start": 103.88, "end": 108.72, "text": " almost anything can be seen as an airplane by the neural network."}, {"start": 108.72, "end": 113.67999999999999, "text": " And this new paper discusses whether we should look at these adversarial examples as bugs"}, {"start": 113.68, "end": 117.48, "text": " or not, and of course, does a lot more than that."}, {"start": 117.48, "end": 122.80000000000001, "text": " It argues that most data sets contain features that are predictive, meaning that they provide"}, {"start": 122.80000000000001, "end": 129.0, "text": " help for a classifier to find cats, but also non-robust, which means that they provide"}, {"start": 129.0, "end": 134.92000000000002, "text": " a rather brittle understanding that falls apart in the presence of adversarial changes."}, {"start": 
134.92000000000002, "end": 140.64000000000001, "text": " We are also shown how to find and eliminate these non-robust features from already existing"}, {"start": 140.64, "end": 146.39999999999998, "text": " data sets and that we can build much more robust classifier neural networks as a result."}, {"start": 146.39999999999998, "end": 151.11999999999998, "text": " This is a truly excellent paper that sparked quite a bit of discussion."}, {"start": 151.11999999999998, "end": 155.04, "text": " And here comes the second part of the video with the something else."}, {"start": 155.04, "end": 159.48, "text": " An interesting new article was published within the distal journal, a journal where you"}, {"start": 159.48, "end": 165.11999999999998, "text": " can expect clearly-warded papers with beautiful and interactive visualizations."}, {"start": 165.11999999999998, "end": 170.44, "text": " But this is no ordinary article, this is a so-called discussion article where a number"}, {"start": 170.44, "end": 175.72, "text": " of researchers were asked to write comments on this paper and create interesting back-and-forth"}, {"start": 175.72, "end": 178.4, "text": " discussions with the original authors."}, {"start": 178.4, "end": 183.92, "text": " Now, make no mistake, the paper we've talked about was peer-reviewed, which means that"}, {"start": 183.92, "end": 190.32, "text": " independent experts have spent time scrutinizing the validity of the results, so this new discussion"}, {"start": 190.32, "end": 196.28, "text": " article was meant to add to it by getting others to replicate the results and clear up potential"}, {"start": 196.28, "end": 197.84, "text": " misunderstandings."}, {"start": 197.84, "end": 202.52, "text": " Through publishing six of these mini-discussions, each of which were addressed by the original"}, {"start": 202.52, "end": 209.24, "text": " authors, they were able to clarify the main takeaways of the paper and even added a section"}, {"start": 209.24, "end": 211.24, "text": " of non-claims as well."}, {"start": 211.24, "end": 216.44, "text": " For instance, it's been clarified that they don't claim that adversarial examples arise"}, {"start": 216.44, "end": 217.68, "text": " from software bugs."}, {"start": 217.68, "end": 223.36, "text": " A huge thanks to the distal journal and all the authors who participated in this discussion"}, {"start": 223.36, "end": 228.16000000000003, "text": " and Ferenc Hussar who suggested the idea of the discussion article to the journal."}, {"start": 228.16000000000003, "end": 232.48000000000002, "text": " I'd love to see more of this and if you do too, make sure to leave a comment so we can"}, {"start": 232.48000000000002, "end": 238.04000000000002, "text": " show them that these endeavors to raise the replicability and clarity of research works"}, {"start": 238.04000000000002, "end": 239.56, "text": " are indeed welcome."}, {"start": 239.56, "end": 243.44000000000003, "text": " Make sure to click the link to both works in the video description and spend a little"}, {"start": 243.44000000000003, "end": 245.0, "text": " quality time with them."}, {"start": 245.0, "end": 246.44000000000003, "text": " You'll be glad you did."}, {"start": 246.44000000000003, "end": 252.16000000000003, "text": " I think this was a more complex than average paper to talk about, however, as you have noticed,"}, {"start": 252.16, "end": 254.56, "text": " the visual fireworks were not there."}, {"start": 254.56, "end": 258.28, "text": " As a result, I expect this to get 
significantly fewer views."}, {"start": 258.28, "end": 263.44, "text": " That's not a great business model, but no matter, I made this channel so I can share with"}, {"start": 263.44, "end": 267.76, "text": " you all these important lessons that I learned during my journey."}, {"start": 267.76, "end": 272.4, "text": " This has been a true privilege and I am thrilled that I am still able to talk about all these"}, {"start": 272.4, "end": 278.44, "text": " amazing papers without worrying too much whether any of these videos go viral or not."}, {"start": 278.44, "end": 283.6, "text": " Things like this are only possible because of your support on patreon.com slash two"}, {"start": 283.6, "end": 284.8, "text": " minute papers."}, {"start": 284.8, "end": 289.28, "text": " If you feel like chipping in, just click the Patreon link in the video description."}, {"start": 289.28, "end": 292.8, "text": " This is why every video ends with, you know what's coming."}, {"start": 292.8, "end": 320.24, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=-ryF7237gNo
This Adorable Baby T-Rex AI Learned To Dribble 🦖
❤️ Check out Linode here and get $20 free credit on your account: https://www.linode.com/papers 📝 The paper "MCP: Learning Composable Hierarchical Control with Multiplicative Compositional Policies" is available here: https://xbpeng.github.io/projects/MCP/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Matthias Jost,, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. About 350 episodes ago in this series, in episode number 8, we talked about an amazing paper in which researchers built virtual characters with a bunch of muscles and joints and, through the power of machine learning, taught them to actuate them just the right way so that they could learn to walk. Well, some of them anyway. Later, we've seen much more advanced variants where we could even teach them to lift weights, jump really high, or even observe how their movements would change after they undergo surgery. This paper is a huge step forward in this area, and if you look at the title, it says that it proposes multiplicative compositional policies to control these characters. What this means is that these complex actions are broken down into a combination of elementary movements. Intuitively, you can imagine something similar when you see a child use small, simple Lego pieces to build a huge, breathtaking spaceship. That sounds great, but what does this do for us? Well, the ability to properly combine these Lego pieces is where the learning part of the technique shines, and you can see on the right that these individual Lego pieces are as amusing as they are useless if they are not combined with others. To assemble efficient combinations that are actually useful, the characters are first required to learn to perform reference motions using combinations of these Lego pieces. Here on the right, the blue bars show which of these Lego pieces are used, and when, in the current movement pattern. Now that we've heard enough about these Legos, what is this whole compositional thing good for? Well, a key advantage of using these is that they are simple enough that they can be transferred and reused for other types of movement. As you see here, this footage demonstrates how we can teach a biped or even a T-Rex to carry and stack boxes, or how to dribble, or how to score a goal. Amusingly, according to the paper, it seems that this T-Rex weighs only 55 kilograms or 121 pounds, an adorable baby T-Rex, if you will. As a result of this transferability property, when we assemble a new agent or wish to teach an already existing character some new moves, we don't have to train them from scratch as they already have access to these Lego pieces. I love seeing all these new papers in the intersection of computer graphics and machine learning. This is a similar topic to what I am working on as a full-time research scientist at the Technical University of Vienna, and in these projects we train plenty of neural networks, which requires a lot of computational resources. Sometimes, when we have to spend time maintaining the machines running these networks, I wish we could use Linode. Linode is the world's largest independent cloud hosting and computing provider, and they have GPU instances that are tailor-made for AI, scientific computing and computer graphics projects. If you feel inspired by these works and you wish to run your experiments or deploy your already existing works through a simple and reliable hosting service, make sure to join over 800,000 other happy customers and choose Linode. To reserve your GPU instance and receive a $20 free credit, visit Linode.com slash papers or click the link in the video description and use the promo code papers20 during sign up. Give it a try today. Our thanks to Linode for supporting the series and helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
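The composition of "Lego pieces" described above can be illustrated numerically: if each primitive proposes a Gaussian action distribution and a gating network assigns it a weight, the weighted product of Gaussians is again a Gaussian whose mean blends the primitives. The numbers below are invented, and this is only a sketch of the composition rule, not the paper's implementation.

```python
# Sketch of multiplicatively composing Gaussian primitive policies: each
# primitive proposes N(mu_i, sigma_i^2) over the next action, a gate assigns a
# weight w_i, and the weighted product of Gaussians is again a Gaussian.
import numpy as np

mu = np.array([0.2, -0.5, 0.9])        # per-primitive action means (invented)
sigma = np.array([0.3, 0.2, 0.4])      # per-primitive action std deviations
w = np.array([0.7, 0.1, 0.2])          # gating weights for the current state

precision = w / sigma ** 2             # weighted precisions
composite_var = 1.0 / precision.sum()
composite_mu = composite_var * (precision * mu).sum()

print(composite_mu, np.sqrt(composite_var))  # blended action distribution
```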
[{"start": 0.0, "end": 6.2, "text": " Dear Fellow Scholars, this is two-minute papers with Karu Zonai-Fehir, about 350 episodes"}, {"start": 6.2, "end": 12.76, "text": " ago in this series, in episode number 8, we talked about an amazing paper in which researchers"}, {"start": 12.76, "end": 18.2, "text": " built virtual characters with a bunch of muscles and joints and through the power of machine"}, {"start": 18.2, "end": 24.48, "text": " learning taught them to actuate them just the right way so that they could learn to walk."}, {"start": 24.48, "end": 30.4, "text": " Well, some of them anyway. Later, we've seen much more advanced variants where we could"}, {"start": 30.4, "end": 37.56, "text": " even teach them to lift weights, jump really high, or even observe how their movements would"}, {"start": 37.56, "end": 43.8, "text": " change after they undergo surgery. This paper is a huge step forward in this area and if"}, {"start": 43.8, "end": 49.6, "text": " you look at the title, it says that it proposes multiplicative composition policies to control"}, {"start": 49.6, "end": 55.32, "text": " these characters. What this means is that these complex actions are broken down into a sum"}, {"start": 55.32, "end": 60.28, "text": " of elementary movements. Intuitively, you can imagine something similar when you see"}, {"start": 60.28, "end": 67.28, "text": " a child use small, simple Lego pieces to build a huge, breathtaking spaceship. That sounds"}, {"start": 67.28, "end": 72.44, "text": " great, but what does this do for us? Well, the ability to properly combine these Lego"}, {"start": 72.44, "end": 77.72, "text": " pieces is where the learning part of the technique shines and you can see on the right that"}, {"start": 77.72, "end": 84.12, "text": " these individual Lego pieces are as amusing as useless if they are not combined with others."}, {"start": 84.12, "end": 89.32, "text": " To assemble efficient combinations that are actually useful, the characters are first"}, {"start": 89.32, "end": 95.96000000000001, "text": " required to learn to perform reference motions using combinations of these Lego pieces. Here"}, {"start": 95.96000000000001, "end": 101.52, "text": " on the right, the blue bars show which of these Lego pieces are used and when in the current"}, {"start": 101.52, "end": 106.96000000000001, "text": " movement pattern. Now, with heard enough of these Legos, what is this whole compositional"}, {"start": 106.96, "end": 112.63999999999999, "text": " thing good for? Well, a key advantage of using these is that they are simple enough so that"}, {"start": 112.63999999999999, "end": 118.52, "text": " they can be transferred and reused for other types of movement. As you see here, this footage"}, {"start": 118.52, "end": 125.03999999999999, "text": " demonstrates how we can teach a biped or even a T-Rex to carry and stack boxes or how"}, {"start": 125.03999999999999, "end": 131.68, "text": " to dribble or how to score a goal. Amusingly, according to the paper, it seems that this"}, {"start": 131.68, "end": 140.48000000000002, "text": " T-Rex weighs only 55 kilograms or 121 pounds, an adorable baby T-Rex, if you will. 
As a result"}, {"start": 140.48000000000002, "end": 146.16, "text": " of this transferability property, when we assemble a new agent or wish to teach an already"}, {"start": 146.16, "end": 150.92000000000002, "text": " existing character some new moves, we don't have to train them from scratch as they already"}, {"start": 150.92000000000002, "end": 155.96, "text": " have access to these Lego pieces. I love seeing all these new papers in the intersection"}, {"start": 155.96, "end": 160.56, "text": " of computer graphics and machine learning. This is a similar topic to what I am working"}, {"start": 160.56, "end": 165.08, "text": " on as a full-time research scientist at the Technical University of Vienna and in these"}, {"start": 165.08, "end": 171.12, "text": " projects we train plenty of neural networks which requires a lot of computational resources."}, {"start": 171.12, "end": 175.48, "text": " Sometimes when we have to spend time maintaining the machines running these networks, I wish"}, {"start": 175.48, "end": 181.04, "text": " we could use Linode. Linode is the world's largest independent cloud hosting and computing"}, {"start": 181.04, "end": 187.52, "text": " provider and they have GPU instances that are tailor-made for AI, scientific computing"}, {"start": 187.52, "end": 192.4, "text": " and computer graphics projects. If you feel inspired by these works and you wish to run"}, {"start": 192.4, "end": 197.52, "text": " your experiments or deploy your already existing works through a simple and reliable hosting"}, {"start": 197.52, "end": 204.44, "text": " service, make sure to join over 800,000 other happy customers and choose Linode. To reserve"}, {"start": 204.44, "end": 211.76000000000002, "text": " your GPU instance and receive a $20 free credit, visit Linode.com slash papers or click the"}, {"start": 211.76, "end": 217.67999999999998, "text": " link in the video description and use the promo code papers20 during sign up. Give it a"}, {"start": 217.67999999999998, "end": 222.56, "text": " try today. Our thanks to Linode for supporting the series and helping us make better videos"}, {"start": 222.56, "end": 250.56, "text": " for you. Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Jnj7OmmOm2Y
This AI Hallucinates Images For You
📷 We are now available on Instagram: https://www.instagram.com/twominutepapers/ 📝 The paper "On the steerability of generative adversarial networks" is available here: https://ali-design.github.io/gan_steerability/ The paper "Learning a Manifold of Fonts" and its demo are available here: http://vecg.cs.ucl.ac.uk/Projects/projects_fonts/projects_fonts.html Our material synthesis paper is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/ ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Matthias Jost, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. As machine learning research advances over time, learning-based techniques are getting better and better at generating images, or even creating videos when given a topic. A few episodes ago, we talked about DeepMind's dual video discriminator technique, in which multiple neural networks compete against each other, teaching our machines to synthesize a collection of two-second-long videos. One of the key advantages of this method was that it learned the concept of changes in the camera view, zooming in on an object, and understood that if someone draws something with a pen, the ink has to remain on the paper unchanged. However, generally, if we wish to ask an AI to synthesize assets for us, we likely have an exact idea of what we are looking for. In these cases, we are looking for a little more artistic control than this technique offers us. So, can we get around this? If so, how? Well, we can. I'll tell you how in a moment, but to understand this solution, we first have to have a firm grasp on the concept of latent spaces. You can think of a latent space as a compressed representation that tries to capture the essence of the dataset that we have at hand. You can see a similar latent space method in action here that organizes different kinds of fonts and presents these options on a 2D plane. And here, you see our technique that builds a latent space for modeling a wide range of photorealistic material models that we can explore. And now, onto this new work. What this tries to do is find a path in the latent space of these images that relates to intuitive concepts like camera zooming, rotation, or shifting. That's not an easy task, but if we pull it off, we'll have more artistic control over these generated images, which will be immensely useful for many creative tasks. This new work can perform that, and not only that, but it is also able to learn the concept of color enhancement and can even increase or decrease the contrast of these images. The key idea of this paper is that this can be done by trying to find crazy, nonlinear trajectories in these latent spaces that happen to relate to these intuitive concepts. It is not perfect, in the sense that we can indeed zoom in on the picture of this dog, but the posture of the dog also changes, and it even seems like we are starting out with a puppy that grows up frame by frame. This means that we have learned to navigate this latent space, but there is still some additional fat in these movements, which is a typical side effect of latent-space-based techniques, and also don't forget that the training data the AI is given also has its own limits. However, as you see, we are now one step closer to not only having an AI that synthesizes images for us, but one that does it exactly with the camera setup, rotation, and colors that we are looking for. What a time to be alive. If you wish to see beautiful formulations of walks, walks in latent spaces, that is, make sure to have a look at the paper in the video description. Also, note that we have now appeared on Instagram with bite-sized pieces of our already bite-sized videos. Yes, it is quite peculiar. Make sure to check it out, just search for Two Minute Papers on Instagram or click the link in the video description. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.32, "end": 7.4, "text": " As machine learning research advances over time,"}, {"start": 7.4, "end": 11.92, "text": " learning-based techniques are getting better and better at generating images"}, {"start": 11.92, "end": 15.0, "text": " or even creating videos when given a topic."}, {"start": 15.0, "end": 19.44, "text": " A few episodes ago, we talked about DeepMise dual video discriminator technique"}, {"start": 19.44, "end": 23.32, "text": " in which multiple neural networks compete against each other"}, {"start": 23.32, "end": 28.240000000000002, "text": " teaching our machines to synthesize a collection of two second-long videos."}, {"start": 28.24, "end": 33.239999999999995, "text": " One of the key advantages of this method was that it learned the concept of changes"}, {"start": 33.239999999999995, "end": 40.239999999999995, "text": " in the camera view, zooming in on an object, and understood that if someone draws something with a pen,"}, {"start": 40.239999999999995, "end": 43.44, "text": " the ink has to remain on the paper unchanged."}, {"start": 43.44, "end": 48.44, "text": " However, generally, if we wish to ask an AI to synthesize assets for us,"}, {"start": 48.44, "end": 52.239999999999995, "text": " we likely have an exact idea of what we are looking for."}, {"start": 52.239999999999995, "end": 57.64, "text": " In these cases, we are looking for a little more artistic control than this technique offers us."}, {"start": 57.64, "end": 61.84, "text": " So, can we get around this? If so, how?"}, {"start": 61.84, "end": 64.64, "text": " Well, we can. I'll tell you how in a moment,"}, {"start": 64.64, "end": 71.24, "text": " but to understand this solution, we first have to have a firm grasp on the concept of latent spaces."}, {"start": 71.24, "end": 74.84, "text": " You can think of a latent space as a compressed representation"}, {"start": 74.84, "end": 79.24000000000001, "text": " that tries to capture the essence of the dataset that we have at hand."}, {"start": 79.24000000000001, "end": 82.44, "text": " You can see a similar latent space method in action here"}, {"start": 82.44, "end": 88.44, "text": " that sets different kinds of font support and presents these options on a 2D plane."}, {"start": 88.44, "end": 92.24, "text": " And here, you see our technique that builds a latent space"}, {"start": 92.24, "end": 98.44, "text": " for modeling a wide range of photorealistic material models that we can explore."}, {"start": 98.44, "end": 100.84, "text": " And now, onto this new work."}, {"start": 100.84, "end": 105.44, "text": " What this tries to do is find a path in the latent space of these images"}, {"start": 105.44, "end": 111.44, "text": " that relates to intuitive concepts like camera zooming, rotation, or shifting."}, {"start": 111.44, "end": 114.24, "text": " That's not an easy task, but if we pull it off,"}, {"start": 114.24, "end": 117.84, "text": " we'll have more artistic control over these generated images,"}, {"start": 117.84, "end": 121.64, "text": " which will be immensely useful for many creative tasks."}, {"start": 121.64, "end": 125.03999999999999, "text": " This new work can perform that, and not only that,"}, {"start": 125.03999999999999, "end": 129.04, "text": " but it is also able to learn the concept of color enhancement"}, {"start": 129.04, "end": 133.84, "text": " and can even increase or decrease the contrast of these 
images."}, {"start": 133.84, "end": 139.04, "text": " The key idea of this paper is that this can be done through trying to find crazy,"}, {"start": 139.04, "end": 145.44, "text": " nonlinear trajectories in these latent spaces that happen to relate to these intuitive concepts."}, {"start": 145.44, "end": 150.44, "text": " It is not perfect in a sense that we can indeed zoom in on the picture of this dog,"}, {"start": 150.44, "end": 153.44, "text": " but the posture of the dog also changes"}, {"start": 153.44, "end": 159.04, "text": " and it even seems like we are starting out with a puppy that grows up frame by frame."}, {"start": 159.04, "end": 162.04, "text": " This means that we have learned to navigate this latent space,"}, {"start": 162.04, "end": 165.84, "text": " but there is still some additional fat in these movements,"}, {"start": 165.84, "end": 169.24, "text": " which is a typical side effect of latent space-based techniques,"}, {"start": 169.24, "end": 175.04, "text": " and also don't forget that the training data the AI is given also has its own limits."}, {"start": 175.04, "end": 180.64000000000001, "text": " However, as you see, we are now one step closer to not only having an AI"}, {"start": 180.64000000000001, "end": 182.64000000000001, "text": " that synthesizes images for us,"}, {"start": 182.64000000000001, "end": 190.04, "text": " but one that does it exactly with the camera setup, rotation, and colors that we are looking for."}, {"start": 190.04, "end": 191.84, "text": " What a time to be alive."}, {"start": 191.84, "end": 196.84, "text": " If you wish to see beautiful formulations of walks, walks in latent spaces,"}, {"start": 196.84, "end": 200.64000000000001, "text": " that is, make sure to have a look at the paper in the video description."}, {"start": 200.64000000000001, "end": 207.44, "text": " Also, note that we have now appeared on Instagram with bite-sized pieces of our bite-sized videos."}, {"start": 207.44, "end": 209.44, "text": " Yes, it is quite peculiar."}, {"start": 209.44, "end": 213.44, "text": " Make sure to check it out, just search for two-minute papers on Instagram"}, {"start": 213.44, "end": 215.84, "text": " or click the link in the video description."}, {"start": 215.84, "end": 222.84, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=IqHs_DkmDVo
Finally, AI-Based Painting is Here!
❤️ Check out Weights & Biases here and sign up for a free demo: https://www.wandb.com/papers 📝 The paper "GANPaint Studio - Semantic Photo Manipulation with a Generative Image Prior" and its online demo are available here: http://ganpaint.io/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #GANPaint
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. A few years ago, the generative adversarial network architecture appeared, which contains two neural networks that try to outcompete each other. It has been used extensively for image generation and has become a research subfield of its own. For instance, these networks can generate faces of people who don't exist, and much, much more. This is great, and we should be grateful to live in a time when breakthroughs like this happen in AI research. However, we should also note that artists usually have a vision of the work that they would like to create, and instead of just getting a deluge of new images, most of them would prefer to have some sort of artistic control over the results. This work offers something that the authors call semantic paintbrushes. This means that we can paint not in terms of colors, but in terms of concepts. Now, this may sound a little nebulous, so if you look here, you see that as a result, we can grow trees, change buildings, and do all kinds of shenanigans without requiring us to be able to draw the results by hand. Look at those marvelous results. It works by compressing down these images into a latent space. This is a representation that is quite sparse and captures the essence of these images. One of the key ideas is that this can then be reconstructed by a generator neural network to get a similar image back. However, the twist is that while we are in the latent domain, we can apply these intuitive edits to the image, so when the generator step takes place, it will carry through our changes. If you look at the paper, you will see that just using one generator network doesn't yield these great results; therefore, this generator needs to be specific to the image we are currently editing. The included user study shows that the new method is preferred over previous techniques. Now, like all of these methods, this is not without limitations. Here you see that despite trying to remove the chairs from the scene, amusingly, we get them right back. That's a bunch of chairs, free of charge. In fact, I'm not even sure how many chairs we got here. If you figure that out, make sure to leave a comment about it, but all in all, that's not what we asked for, and solving this remains a challenge for the entire family of these algorithms. And good news. In fact, when talking about a paper, probably the best kind of news is that you can try it online through a web demo right now. Make sure to try it and post your results here if you find anything interesting. The authors themselves may also learn something new from us about interesting new failure cases. It has happened before in this series. This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It is like a shared logbook for your team, and with this, you can compare your own experiment results, put them next to what your colleagues did, and discuss your successes and failures much more easily. It takes less than 5 minutes to set up and is being used by OpenAI, Toyota Research, Stanford, and Berkeley. It was also used in the OpenAI project that you see here, which we covered earlier in the series. They reported that experiment tracking was crucial in this project, and that this tool saved them quite a bit of time and money. If only I had access to such a tool during our last research project, where I had to compare the performance of neural networks for months and months. Well, it turns out I will be able to get access to these tools because, get this, it's free, and will always be free for academics and open source projects. Make sure to visit them through wandb.com/papers, that is, wandb.com slash papers, or just click the link in the video description and sign up for a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.0, "text": " Dear Fellow Scholars, this is two-minute papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.0, "end": 10.0, "text": " A few years ago, the Generative Adversarial Network Architecture appeared that contains two"}, {"start": 10.0, "end": 14.0, "text": " neural networks that try to outcompete each other."}, {"start": 14.0, "end": 20.0, "text": " It has been used extensively for image generation and has become a research subfield of its own."}, {"start": 20.0, "end": 25.0, "text": " For instance, they can generate faces of people that don't exist and much, much more."}, {"start": 25.0, "end": 31.0, "text": " This is great, we should be grateful to live in a time when breakthroughs like this happen in AI research."}, {"start": 31.0, "end": 37.0, "text": " However, we should also note that artists usually have a vision of the work that they would like to create"}, {"start": 37.0, "end": 45.0, "text": " and instead of just getting a deluge of new images, most of them would prefer to have some sort of artistic control over the results."}, {"start": 45.0, "end": 49.0, "text": " This work offers something that they call semantic paintbrushes."}, {"start": 49.0, "end": 55.0, "text": " This means that we can paint not in terms of colors, but in terms of concepts."}, {"start": 55.0, "end": 61.0, "text": " Now this may sound a little nebulous, so if you look here, you see that as a result, we can grow trees,"}, {"start": 61.0, "end": 69.0, "text": " change buildings, and do all kinds of shenanigans without requiring us to be able to draw the results by hand."}, {"start": 69.0, "end": 71.0, "text": " Look at those marvelous results."}, {"start": 71.0, "end": 75.0, "text": " It works by compressing down these images into a latent space."}, {"start": 75.0, "end": 81.0, "text": " This is a representation that is quite sparse and captures the essence of these images."}, {"start": 81.0, "end": 88.0, "text": " One of the key ideas is that this can then be reconstructed by a generator neural network to get a similar image back."}, {"start": 88.0, "end": 96.0, "text": " However, the twist is that while we are in the latent domain, we can apply these intuitive edits to this image,"}, {"start": 96.0, "end": 100.0, "text": " so when the generator step takes place, it will carry through our changes."}, {"start": 100.0, "end": 107.0, "text": " If you look at the paper, you will see that just using one generator network doesn't yield these great results,"}, {"start": 107.0, "end": 112.0, "text": " therefore, this generator needs to be specific to the image we are currently editing."}, {"start": 112.0, "end": 118.0, "text": " The included user study shows that the new method is preferred over the previous techniques."}, {"start": 118.0, "end": 122.0, "text": " Now, like all of these methods, this is not without limitations."}, {"start": 122.0, "end": 129.0, "text": " Here you see that despite trying to remove the chairs from the scene, amusingly, we get them right back."}, {"start": 129.0, "end": 132.0, "text": " That's a bunch of chairs, free of charge."}, {"start": 132.0, "end": 135.0, "text": " In fact, I'm not even sure how many chairs we got here."}, {"start": 135.0, "end": 141.0, "text": " If you figure that out, make sure to leave a comment about it, but all in all, that's not what we asked for,"}, {"start": 141.0, "end": 146.0, "text": " and solving this remains a challenge for the entire family of these algorithms."}, {"start": 146.0, "end": 148.0, "text": " And 
good news."}, {"start": 148.0, "end": 156.0, "text": " In fact, when talking about a paper, probably the best kind of news is that you can try it online through a web demo right now."}, {"start": 156.0, "end": 160.0, "text": " Make sure to try it and post your results here if you find anything interesting."}, {"start": 160.0, "end": 166.0, "text": " The authors themselves may also learn something new from us about interesting new failure cases."}, {"start": 166.0, "end": 169.0, "text": " It has happened before in this series."}, {"start": 169.0, "end": 172.0, "text": " This episode has been supported by Wates and Biasis."}, {"start": 172.0, "end": 177.0, "text": " Wates and Biasis provides tools to track your experiments in your deep learning projects."}, {"start": 177.0, "end": 183.0, "text": " It is like a shared logbook for your team, and with this, you can compare your own experiment results,"}, {"start": 183.0, "end": 189.0, "text": " put them next to what your colleagues did, and you can discuss your successes and failures much easier."}, {"start": 189.0, "end": 196.0, "text": " It takes less than 5 minutes to set up, and is being used by OpenAI, Toyota Research, Stanford, and Berkeley."}, {"start": 196.0, "end": 202.0, "text": " It was also used in this OpenAI project that you see here, which we covered earlier in the series."}, {"start": 202.0, "end": 210.0, "text": " They reported that experiment tracking was crucial in this project, and that this tool saved them quite a bit of time and money."}, {"start": 210.0, "end": 219.0, "text": " If only I had access to such a tool during our last research project where I had to compare the performance of neural networks for months and months."}, {"start": 219.0, "end": 229.0, "text": " Well, it turns out I will be able to get access to these tools because get this, it's free, and will always be free for academics and open source projects."}, {"start": 229.0, "end": 241.0, "text": " Make sure to visit them through WendeeB.com, slash papers, WendeeB.com slash papers, or just click the link in the video description and sign up for a free demo today."}, {"start": 241.0, "end": 245.0, "text": " Our thanks to Wates and Biasis for helping us make better videos for you."}, {"start": 245.0, "end": 260.0, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=IMZkLVBhcig
DeepMind’s New AI Dreams Up Videos on Many Topics
📝 The paper "Efficient Video Generation on Complex Datasets" is available here: https://arxiv.org/abs/1907.06571 ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In the last few years, the pace of progress in machine learning research has been staggering. Neural network-based learning algorithms are now able to look at an image and describe what is seen in it, or, even better, the other way around: generating images from a written description. You see here a set of results from BigGAN, a state-of-the-art image generation technique, and can marvel at the fact that all of these images are indeed synthetic. The GAN part of this technique abbreviates the term generative adversarial network. This means a pair of neural networks that battle each other over time to master a task, for instance, to generate realistic-looking images when given a theme. These detailed images are great, but what about generating video? With the Dual Video Discriminator GAN, DVD-GAN in short, and DeepMind's naming game is still as strong as ever, it is now possible to create longer and higher-resolution videos than was previously possible. The exact numbers are 256x256 in terms of resolution and 48 frames, which is about 2 seconds. It also learned the concept of changes in the camera view, zooming in on an object, and understands that if someone draws something with a pen, the ink has to remain on the paper unchanged. The dual discriminator part of the name reveals one of the key ideas of the paper. In a classical GAN, we have a discriminator network that looks at the images of the generator network and critiques them. As a result, the discriminator learns to tell fake and real images apart better, but, at the same time, provides ample feedback for the generator neural network so it can come up with better images. In this work, we have not one, but two discriminators. One is called a spatial discriminator, which looks at just one image and assesses how good it is structurally, while the second, temporal discriminator critiques the quality of movement in these videos. This additional information provides better teaching for the generator, which will, in a way, be able to generate better videos for us. The paper contains all the details that you could possibly want to learn about this algorithm. In fact, let me give you two that I found to be particularly interesting. One, it does not get any additional information about where the foreground and the background is, and is able to leverage the learning capacity of these neural networks to learn these concepts by itself. And two, it does not generate the video frame by frame sequentially, but creates the entire video in one go. That's wild. Now, 256 by 256 is not a particularly high video resolution, but if you have been watching this series for a while, you are probably already saying that two more papers down the line, and we may be watching HD videos that are also longer than we have the patience to watch. All this through the power of machine learning research. For now, let's applaud DeepMind for this amazing paper, and I can't wait to have a look at more results and see some follow-up works on it. What a time to be alive. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.36, "text": " Dear Fellow Scholars, this is two-minute papers with Karojola Ifahir."}, {"start": 4.36, "end": 9.96, "text": " In the last few years, the pace of progress in machine learning research has been staggering."}, {"start": 9.96, "end": 15.040000000000001, "text": " Neural network-based learning algorithms are now able to look at an image and describe"}, {"start": 15.040000000000001, "end": 20.72, "text": " what's seen in this image or even better the other way around, generating images from"}, {"start": 20.72, "end": 22.12, "text": " a written description."}, {"start": 22.12, "end": 27.2, "text": " You see here, a set of results from BigGam, a state-of-the-art image generation technique"}, {"start": 27.2, "end": 31.8, "text": " and marvel at the fact that all of these images are indeed synthetic."}, {"start": 31.8, "end": 36.84, "text": " The GAM part of this technique abbreviates the term generative adversarial network."}, {"start": 36.84, "end": 42.6, "text": " This means a pair of neural networks that battle each other over time to master a task,"}, {"start": 42.6, "end": 46.8, "text": " for instance, to generate realistic-looking images when given a theme."}, {"start": 46.8, "end": 51.68, "text": " These detailed images are great, but what about generating video?"}, {"start": 51.68, "end": 57.44, "text": " With the dual video discriminator again, DVD GAM in short, DeepMind's naming game is still"}, {"start": 57.44, "end": 63.4, "text": " as strong as ever, it is now possible to create longer and higher resolution videos than"}, {"start": 63.4, "end": 65.03999999999999, "text": " was previously possible."}, {"start": 65.03999999999999, "end": 72.88, "text": " The exact numbers are 256x256 in terms of resolution and 48 frames, which is about"}, {"start": 72.88, "end": 74.24, "text": " 2 seconds."}, {"start": 74.24, "end": 79.72, "text": " It also learned the concept of changes in the camera view, zooming in on an object and"}, {"start": 79.72, "end": 85.32, "text": " understands that if someone draws something with a pen, the ink has to remain on the paper"}, {"start": 85.32, "end": 86.32, "text": " unchanged."}, {"start": 86.32, "end": 91.4, "text": " The dual discriminator part of the name reveals one of the key ideas of the paper."}, {"start": 91.4, "end": 96.24, "text": " In a classical GAM, we have a discriminator network that looks at the images of the generator"}, {"start": 96.24, "end": 98.56, "text": " network and critiques them."}, {"start": 98.56, "end": 104.56, "text": " As a result, the discriminator learns to tell fake and real images apart better, but, at"}, {"start": 104.56, "end": 109.8, "text": " the same time, provides ample feedback for the generator neural network so it can come"}, {"start": 109.8, "end": 111.72, "text": " up with better images."}, {"start": 111.72, "end": 115.64, "text": " In this work, we have not one, but two discriminators."}, {"start": 115.64, "end": 121.32000000000001, "text": " One is called a spatial discriminator that looks at just one image and assesses how good"}, {"start": 121.32000000000001, "end": 127.68, "text": " it is structurally, while the second temporal discriminator critiques the quality of movement"}, {"start": 127.68, "end": 129.12, "text": " in these videos."}, {"start": 129.12, "end": 133.72, "text": " This additional information provides better teaching for the generator, which will, in"}, {"start": 133.72, "end": 137.44, "text": " a way, be able to generate better videos for us."}, 
{"start": 137.44, "end": 142.48, "text": " The paper contains all the details that you could possibly want to learn about this algorithm."}, {"start": 142.48, "end": 147.24, "text": " In fact, let me give you two that I found to be particularly interesting."}, {"start": 147.24, "end": 152.6, "text": " One, it does not get any additional information about where the foreground and the background"}, {"start": 152.6, "end": 157.84, "text": " is, and is able to leverage the learning capacity of these neural networks to learn these"}, {"start": 157.84, "end": 160.12, "text": " concepts by itself."}, {"start": 160.12, "end": 165.76, "text": " And two, it does not generate the video frame by frame sequentially, but it creates the"}, {"start": 165.76, "end": 168.88, "text": " entire video in one go."}, {"start": 168.88, "end": 169.88, "text": " That's wild."}, {"start": 169.88, "end": 177.56, "text": " Now, 256 by 256 is not a particularly high video resolution, but if you have been watching"}, {"start": 177.56, "end": 182.24, "text": " this series for a while, you are probably already saying that two more papers down the"}, {"start": 182.24, "end": 187.56, "text": " line and we may be watching HD videos that are also longer than we have the patience to"}, {"start": 187.56, "end": 188.56, "text": " watch."}, {"start": 188.56, "end": 192.04, "text": " All this through the power of machine learning research."}, {"start": 192.04, "end": 197.24, "text": " For now, let's applaud deep mind for this amazing paper and I can't wait to have a look"}, {"start": 197.24, "end": 200.36, "text": " at more results and see some follow-up works on it."}, {"start": 200.36, "end": 201.72, "text": " What a time to be alive."}, {"start": 201.72, "end": 230.16, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=hkSfHCtpnHU
AI Learns Facial Animation in VR
❤️ Check out Linode here and get $20 free credit on your account: https://www.linode.com/papers 📝 The paper "VR Facial Animation via Multiview Image Translation" is available here: https://research.fb.com/publications/vr-facial-animation-via-multiview-image-translation/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #VR
Dear Fellow Scholars, this is 2 Minute Papers with Károly Zsolnai-Fehér. One of the main promises of virtual reality, VR in short, is enhancing the quality of our remote interactions. With VR, we could talk with our colleagues and beloved ones through telepresence applications that create a virtual avatar of us, much like the ones you see here. Normally, this requires putting sensors all over our faces to be able to reconstruct the gestures we make. A previous work used a depth camera that was hanging off of the VR headset, thus having a better look at the entirety of our face, while a later work used a mouth camera to solve this problem. This new paper attempts to capture all of our gestures with a headset without these additional complexities, using no more than three infrared cameras. No extra devices hanging off of the headset, nothing. All of them are built into the headpiece. This means two key challenges. One is the fact that the sensor below sees the face at an uncomfortable, oblique angle; below, you see exactly the data that is being captured by the three sensors. And two, the output of this process should be a virtual avatar, but it is unclear what the correspondence between all this data and the animated character should be. So the idea sounds great; the only problem is that this is near impossible. So how did the researchers end up doing this? Well, what they did is they built a prototype headset with six additional sensors. Now, wearing this headset would perhaps not be too much more convenient than the previous works we've looked at a moment ago. But don't judge this work just yet, because this additional information is required to create the output avatar, and then the smaller three-sensor headset can be trained by dropping these additional views. In short, the augmented, more complex camera setup is used as a crutch to train the smaller headset. Amazing idea, I love it. Our more experienced Fellow Scholars also know that there is a little style transfer magic being done here. And finally, all of these partial views are then stitched together into the final avatar. You can also see here that it smokes the competition, uses only three sensors, and does all this in real time. Wow, if you want to show your friends how you are about to sneeze in the highest possible quality video footage, look no further. Now, I'm a research scientist by day, and I also run my own projects where I cannot choose my own hosting provider, and every time I have problems with it, I tell my wife that I wish we could use Linode. Linode is the world's largest independent cloud hosting and computing provider, and they just introduced a GPU server pilot program. These GPU instances are tailor-made for AI, scientific computing, and computer graphics projects. Yes, exactly the kind of works you see here in this series. If you feel inspired by these works and you wish to run your own experiments or deploy your already existing works through a simple and reliable hosting service, make sure to join over 800,000 other happy customers and choose Linode. Note that this is a pilot program with limited availability. To reserve your GPU instance at a discounted rate, make sure to visit linode.com slash papers or click the link in the description and use the promo code papers20 to get $20 free on your account. You also get super fast storage and proper support if you have any questions. Give it a try today. Our thanks to Linode for supporting the series and helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 5.44, "text": " Dear Fellow Scholars, this is 2 Minute Papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 5.44, "end": 11.16, "text": " One of the main promises of virtual reality, VR in short, is enhancing the quality of our"}, {"start": 11.16, "end": 12.88, "text": " remote interactions."}, {"start": 12.88, "end": 18.16, "text": " With VR, we could talk with our colleagues and beloved ones through telepresence applications"}, {"start": 18.16, "end": 23.2, "text": " that create a virtual avatar of us, much like the ones you see here."}, {"start": 23.2, "end": 28.12, "text": " Normally, this requires putting sensors all over our faces to be able to reconstruct"}, {"start": 28.12, "end": 29.68, "text": " the gestures we make."}, {"start": 29.68, "end": 35.56, "text": " A previous work used a depth camera that was hanging off of the VR headset, thus having"}, {"start": 35.56, "end": 40.6, "text": " a better look at the entirety of our face while a later work used a mouth camera to solve"}, {"start": 40.6, "end": 41.8, "text": " this problem."}, {"start": 41.8, "end": 47.04, "text": " This new paper attempts to capture all of our gestures by using a headset without these"}, {"start": 47.04, "end": 52.92, "text": " additional complexities by using no more than three infrared cameras."}, {"start": 52.92, "end": 56.72, "text": " No extra devices hanging off of the headset, nothing."}, {"start": 56.72, "end": 59.4, "text": " All of them are built into the headpiece."}, {"start": 59.4, "end": 61.68, "text": " This means two key challenges."}, {"start": 61.68, "end": 67.4, "text": " One is the fact that the sensor below sees the face in an uncomfortable, oblique angle,"}, {"start": 67.4, "end": 72.16, "text": " below you see exactly the data that is being captured by the three sensors."}, {"start": 72.16, "end": 78.12, "text": " And two, the output of this process should be a virtual avatar, but it is unclear what"}, {"start": 78.12, "end": 83.4, "text": " the correspondence between all this data and the animated character should be."}, {"start": 83.4, "end": 88.32, "text": " So the idea sounds great, the only problem is that this is near impossible."}, {"start": 88.32, "end": 91.39999999999999, "text": " So how did the researchers end up doing this?"}, {"start": 91.39999999999999, "end": 97.39999999999999, "text": " Well, what they did is they built a prototype headset with six additional sensors."}, {"start": 97.39999999999999, "end": 102.44, "text": " Now wearing this headset would perhaps not be too much more convenient than the previous"}, {"start": 102.44, "end": 105.08, "text": " works we've looked at a moment ago."}, {"start": 105.08, "end": 110.35999999999999, "text": " But don't judge this work just yet because this additional information is required to create"}, {"start": 110.35999999999999, "end": 117.08, "text": " the output avatar and then the smaller three sensor headset can be trained by dropping"}, {"start": 117.08, "end": 118.96, "text": " these additional views."}, {"start": 118.96, "end": 125.75999999999999, "text": " In short, the augmented, more complex camera is used as a crutch to train the smaller headset."}, {"start": 125.75999999999999, "end": 128.52, "text": " Amazing idea, I love it."}, {"start": 128.52, "end": 132.8, "text": " Our more experienced fellow scholars also know that there is a little style transfer magic"}, {"start": 132.8, "end": 134.4, "text": " being done here."}, {"start": 134.4, "end": 145.36, "text": " And finally, all of 
these partial views are then stitched together into the final avatar."}, {"start": 145.36, "end": 151.60000000000002, "text": " You can also see here that it smokes the competition, uses only three sensors and does all this"}, {"start": 151.60000000000002, "end": 153.32000000000002, "text": " in real time."}, {"start": 153.32000000000002, "end": 159.16000000000003, "text": " Wow, if you want to show your friends how you are about to sneeze in the highest possible"}, {"start": 159.16000000000003, "end": 162.0, "text": " quality video footage, look no further."}, {"start": 162.0, "end": 167.4, "text": " Now, I'm a research scientist by day and I also run my own projects where I cannot choose"}, {"start": 167.4, "end": 172.68, "text": " my own hosting provider and every time I have problems with it, I tell my wife that I wish"}, {"start": 172.68, "end": 174.28000000000003, "text": " we could use Linode."}, {"start": 174.28, "end": 179.32, "text": " Linode is the world's largest independent cloud hosting and computing provider and they"}, {"start": 179.32, "end": 183.28, "text": " just introduced a GPU server pilot program."}, {"start": 183.28, "end": 189.28, "text": " These GPU instances are tailor made for AI, scientific computing and computer graphics"}, {"start": 189.28, "end": 190.28, "text": " projects."}, {"start": 190.28, "end": 194.4, "text": " Yes, exactly the kind of works you see here in this series."}, {"start": 194.4, "end": 199.56, "text": " If you feel inspired by these works and you wish to run your own experiments or deploy"}, {"start": 199.56, "end": 204.92000000000002, "text": " your already existing works through a simple and reliable hosting service, make sure to join"}, {"start": 204.92000000000002, "end": 209.68, "text": " over 800,000 other happy customers and choose Linode."}, {"start": 209.68, "end": 213.2, "text": " Note that this is a pilot program with limited availability."}, {"start": 213.2, "end": 220.12, "text": " To reserve your GPU instance at a discounted rate, make sure to visit linode.com slash papers"}, {"start": 220.12, "end": 226.48000000000002, "text": " or click the link in the description and use the promo code papers20 to get $20 free"}, {"start": 226.48000000000002, "end": 227.8, "text": " on your account."}, {"start": 227.8, "end": 232.12, "text": " You also get super fast storage and proper support if you have any questions."}, {"start": 232.12, "end": 233.36, "text": " Give it a try today."}, {"start": 233.36, "end": 238.28, "text": " Our thanks to Linode for supporting the series and helping us make better videos for you."}, {"start": 238.28, "end": 266.28, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
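The "use the bigger rig as a crutch" training strategy can be illustrated with a toy teacher-student setup, assuming a teacher model that sees all nine views and a student that sees only the three headset views. The real paper does this with multiview image translation, so treat the snippet below purely as a sketch of the training strategy, with made-up shapes and random stand-in data.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
view_dim, n_all, n_small, avatar_dim = 64, 9, 3, 32
teacher = nn.Sequential(nn.Linear(n_all * view_dim, 128), nn.ReLU(), nn.Linear(128, avatar_dim))
student = nn.Sequential(nn.Linear(n_small * view_dim, 128), nn.ReLU(), nn.Linear(128, avatar_dim))

# Pretend the teacher was already trained on the full prototype rig; freeze it.
for p in teacher.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for step in range(200):
    all_views = torch.randn(16, n_all, view_dim)              # stand-in sensor data
    target = teacher(all_views.reshape(16, -1))               # "ground truth" avatar params
    pred = student(all_views[:, :n_small].reshape(16, -1))    # only the 3 built-in views
    loss = torch.mean((pred - target) ** 2)                   # imitate the teacher
    opt.zero_grad(); loss.backward(); opt.step()

print("student imitation loss:", loss.item())
```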
Two Minute Papers
https://www.youtube.com/watch?v=CSQPD3oyvD8
Simulating Water and Debris Flows
❤️ You can support the show through Patreon: https://www.patreon.com/TwoMinutePapers 📝 The paper "Animating Fluid Sediment Mixture in Particle-Laden Flows" is available here: http://pages.cs.wisc.edu/~sifakis/papers/MPM-particle-laden-flow.pdf https://dl.acm.org/citation.cfm?id=3201309 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. We recently talked about an amazing paper that uses mixture theory to simulate the interaction of liquids and fabrics. And this new work is about simulating fluid flows where we have some debris or other foreign particles in our liquids. This is really challenging. For example, one of the key challenges is incorporating two-way coupling into this process. This means that the sand is allowed to have an effect on the fluid, but at the same time, as the fluid sloshes around, it also moves the sand particles within. Now, before you start wondering whether this is real footage or not, the fact that this is a simulation should become clear now, because what you see here in the background is where the movement of the two domains is shown in isolation. Just look at how much interaction there is between the two. Unbelievable. Beautiful simulation. Ice cream for your eyes. This new method also contains a novel density correction step, and if you watch closely here, you'll notice why. Got it? Let's watch it again. If we try to run this elastoplastic simulation with these two previous methods, they introduce a gain in density, or in other words, we end up with more stuff than we started with. These two rows show the number of particles in the simulation in the worst-case scenario, and as you see, some of these incorporate millions of particles for the fluid and many hundreds of thousands for the sediment. Since this work uses the material point method, which is a hybrid simulation technique that uses both particles and grids, the delta x row denotes the resolution of the simulation grid. Now, since these grids are often used for 3D simulations, we need to raise the 256 and the 512 to the third power, and with that, we get a simulation grid with up to hundreds of millions of points, and we haven't even talked about the particle representation yet. In the face of all of these challenges, the simulator is able to compute one frame in a matter of minutes, and not hours or days, which is an incredible feat. With this, I think it is easy to see that computer graphics research is improving at a staggering pace. What a time to be alive. If you enjoyed this episode, please consider supporting us through Patreon. Our address is patreon.com slash TwoMinutePapers, or just click the link in the video description. With this, we can make better videos for you. You can also get your name immortalized in the video description as a key supporter, or watch these videos earlier than others. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.62, "text": " Dear Fellow Scholars, this is two-minute paper sweet caro jjona ifahir."}, {"start": 4.62, "end": 10.74, "text": " We recently talked about an amazing paper that uses mixture theory to simulate the interaction"}, {"start": 10.74, "end": 12.96, "text": " of liquids and fabrics."}, {"start": 12.96, "end": 17.56, "text": " And this new work is about simulating fluid flows where we have some debris or other"}, {"start": 17.56, "end": 20.14, "text": " foreign particles in our liquids."}, {"start": 20.14, "end": 22.1, "text": " This is really challenging."}, {"start": 22.1, "end": 28.32, "text": " For example, one of the key challenges is incorporating two-way coupling into this process."}, {"start": 28.32, "end": 33.44, "text": " This means that the sand is allowed to have an effect on the fluid, but at the same time"}, {"start": 33.44, "end": 38.2, "text": " as the fluid sloshes around, it also moves the sand particles within."}, {"start": 38.2, "end": 42.96, "text": " Now, before you start wondering whether this is real footage or not, the fact that this"}, {"start": 42.96, "end": 48.120000000000005, "text": " is a simulation should become clear now because what you see here in the background is where"}, {"start": 48.120000000000005, "end": 52.16, "text": " the movement of the two domains are shown in isolation."}, {"start": 52.16, "end": 55.72, "text": " Just look at how much interaction there is between the two."}, {"start": 55.72, "end": 56.72, "text": " Unbelieveable."}, {"start": 56.72, "end": 58.4, "text": " Beautiful simulation."}, {"start": 58.4, "end": 60.199999999999996, "text": " I scream for your eyes."}, {"start": 60.199999999999996, "end": 65.28, "text": " This new method also contains a novel, density correction step, and if you watch closely"}, {"start": 65.28, "end": 67.48, "text": " here, you'll notice why."}, {"start": 67.48, "end": 69.28, "text": " Got it?"}, {"start": 69.28, "end": 70.52, "text": " Let's watch it again."}, {"start": 70.52, "end": 75.8, "text": " If we try to run this Elastoplastic simulation for these two previous methods, they introduce"}, {"start": 75.8, "end": 84.75999999999999, "text": " here again in density or in other words, we end up with more stuff than we started with."}, {"start": 84.76, "end": 89.88000000000001, "text": " These two rows show the number of particles in the simulation in the worst case scenario,"}, {"start": 89.88000000000001, "end": 95.52000000000001, "text": " and as you see, some of these incorporate millions of particles for the fluid and many hundreds"}, {"start": 95.52000000000001, "end": 97.72, "text": " of thousands for the sediment."}, {"start": 97.72, "end": 102.60000000000001, "text": " Since this work uses the material point method, which is a hybrid simulation technique that"}, {"start": 102.60000000000001, "end": 109.36000000000001, "text": " uses both particles and grids, the delta x row denotes the resolution of the simulation"}, {"start": 109.36000000000001, "end": 110.36000000000001, "text": " grid."}, {"start": 110.36, "end": 116.88, "text": " Now, since these grids are often used for 3D simulations, we need to raise the 256 and"}, {"start": 116.88, "end": 123.28, "text": " the 512 to the third power, and with that, we get a simulation grid with up to hundreds"}, {"start": 123.28, "end": 128.88, "text": " of millions of points, and we haven't even talked about the particle representation yet."}, {"start": 128.88, "end": 134.07999999999998, "text": " In the face of all of 
these challenges, the simulator is able to compute one frame in"}, {"start": 134.07999999999998, "end": 139.28, "text": " a matter of minutes and not hours or days, which is an incredible feat."}, {"start": 139.28, "end": 144.68, "text": " But this, I think it is easy to see that computer graphics research is improving at a staggering"}, {"start": 144.68, "end": 145.84, "text": " pace."}, {"start": 145.84, "end": 147.4, "text": " What a time to be alive."}, {"start": 147.4, "end": 151.32, "text": " If you enjoyed this episode, please consider supporting us through Patreon."}, {"start": 151.32, "end": 158.04, "text": " Our address is patreon.com slash 2 minute papers, or just like the link in the video description."}, {"start": 158.04, "end": 160.36, "text": " With this, we can make better videos for you."}, {"start": 160.36, "end": 165.2, "text": " You can also get your name immortalized in the video description as a key supporter,"}, {"start": 165.2, "end": 168.32, "text": " or watch these videos earlier than others."}, {"start": 168.32, "end": 172.2, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
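For reference, the grid sizes quoted above follow from simple arithmetic; the snippet below just raises the per-axis resolutions to the third power.

```python
# Back-of-the-envelope check of the 3D simulation grid sizes mentioned above.
for n in (256, 512):
    print(f"{n}^3 = {n**3:,} grid points")
# 256^3 = 16,777,216 grid points
# 512^3 = 134,217,728 grid points, i.e. over a hundred million points,
# before even counting the particle representation.
```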
Two Minute Papers
https://www.youtube.com/watch?v=wVtOuvFlczg
Augmented Reality Presentations Are Coming!
📝 The paper "Interactive Body-Driven Graphics for Augmented Video Performance" is available here: https://1iyiwei.github.io/ibg-chi19/ https://hal.archives-ouvertes.fr/hal-02005318/document ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #ar #metaverse
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this series, we talk about amazing research papers. However, when a paper is published, a talk also often has to be given at a conference. And this paper is about the talk itself, or more precisely, how to enhance your presentation with dynamic graphics. Now, effects like these can be added to music videos and documentary movies; however, they take a long time and cost a fortune. But not these ones, because this paper proposes a simple framework in which the presenter stands before a Kinect camera and an AR mirror monitor and can trigger these cool little graphical elements with simple gestures. A key part of the paper is the description of a user interface where we can design these mappings. This skeleton represents the presenter, who is tracked by the Kinect camera, and as you see here, we can define interactions between these elements and the presenter, such as grabbing the umbrella, pulling up a chart, and more. As you see with the examples here, using such a system leads to more immersive storytelling, and note that, again, this is an early implementation of this really cool idea. A few more papers down the line, I can imagine rotatable and deformable 3D models and photorealistic rendering entering the scene. Well, sign me up for that. If you have any creative ideas as to how this could be used or improved, make sure to leave a comment. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.48, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fahir."}, {"start": 4.48, "end": 7.68, "text": " In this series, we talk about amazing research papers."}, {"start": 7.68, "end": 13.48, "text": " However, when a paper is published, also a talk often has to be given at a conference."}, {"start": 13.48, "end": 19.8, "text": " And this paper is about the talk itself, or more precisely, how to enhance your presentation"}, {"start": 19.8, "end": 21.32, "text": " with dynamic graphics."}, {"start": 21.32, "end": 26.92, "text": " Now, these effects can be added to music videos and documentary movies, however, they take"}, {"start": 26.92, "end": 30.12, "text": " a long time and cost a fortune."}, {"start": 30.12, "end": 35.08, "text": " But not these ones, because this paper proposes a simple framework in which the presenter"}, {"start": 35.08, "end": 41.14, "text": " stands before a Kinect camera and an AR mirror monitor and can trigger these cool little"}, {"start": 41.14, "end": 43.96, "text": " graphical elements with simple gestures."}, {"start": 43.96, "end": 48.72, "text": " A key part of the paper is the description of a user interface where we can design these"}, {"start": 48.72, "end": 50.0, "text": " mappings."}, {"start": 50.0, "end": 55.1, "text": " This skeleton represents the presenter who is tracked by the Kinect camera, and as you"}, {"start": 55.1, "end": 60.620000000000005, "text": " see here, we can define interactions between these elements and the presenter, such as grabbing"}, {"start": 60.620000000000005, "end": 64.14, "text": " the umbrella, pull up a chart, and more."}, {"start": 64.14, "end": 69.82000000000001, "text": " As you see with the examples here, using such a system leads to more immersive storytelling,"}, {"start": 69.82000000000001, "end": 74.38, "text": " and note that again, this is an early implementation of this really cool idea."}, {"start": 74.38, "end": 80.94, "text": " A few more papers down the line, I can imagine rotatable and deformable 3D models and photorealistic"}, {"start": 80.94, "end": 83.1, "text": " rendering entering the scene."}, {"start": 83.1, "end": 85.14, "text": " Well, sign me up for that."}, {"start": 85.14, "end": 89.86, "text": " If you have any creative ideas as to how this could be used or improved, make sure to leave"}, {"start": 89.86, "end": 90.86, "text": " a comment."}, {"start": 90.86, "end": 120.42, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=u90TbxK7VEA
This Superhuman Poker AI Was Trained in 20 Hours!
❤️ Check out Weights & Biases here and sign up for a free demo: https://www.wandb.com/papers Weights & Biases blog post with the 1 line of code visualization: https://www.wandb.com/articles/visualize-keras-models-with-one-line-of-code 📝 The paper "Superhuman AI for multiplayer poker" is available here: - https://ai.facebook.com/blog/pluribus-first-ai-to-beat-pros-in-6-player-poker/ - https://www.cs.cmu.edu/~noamb/papers/19-Science-Superhuman.pdf - https://science.sciencemag.org/content/early/2019/07/10/science.aay2400 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #Poker #PokerAI
Dear Fellow Scholars, this is 2 Minute Papers with Károly Zsolnai-Fehér. Today, the game we'll be talking about is 6-player no-limit hold'em poker, which is one of the more popular poker variants out there. And the goal of this project was to build a poker AI that never played against a human before, learns entirely through self-play, and is able to defeat professional human players. During these tests, two of the players it was tested against are former World Series of Poker Main Event winners. And of course, before you ask, yes, in a moment we'll look at an example hand that shows how the AI traps a human player. Poker is very difficult to learn for AI bots because it is a game of imperfect information. For instance, chess is a game of perfect information where we see all the pieces and can make a good decision if we analyze the situation well. However, not so much in poker, because only at the very end of the hand do the players show what they have. This makes it extremely difficult to train an AI to do well. And now, let's have a look at the promised example hand here. We talked about imperfect information just a moment ago, so I'll note that all the cards are shown face up for us to make the analysis of this hand easier. Of course, this is not how the hands were played. You see the AI up here, marked with P2, sitting pretty with a jack and a queen, and before the flop happens, which is when the first three cards are revealed, only one human player seems to be interested in this hand. During the flop, the AI paired its queen and has a jack as a kicker, which, if played well, is going to be disastrous for the human player. So why is that? You see, the human player also paired their queen but has a weaker kicker and will therefore lose to the AI's hand. In this case, these players think they have a strong hand and will get lots of value out of it, only to find out that they will be the one milked by the AI. So how exactly does that happen? Well, look here carefully. The bot shows weakness by checking here, to which the human player's answer is a small raise. But it again shows weakness by just calling the raise and checking again on the turn, essentially saying, I am weak, don't hurt me. By the time we get to the river, the AI again appears weak to the human player, who now tries to milk the bot with a mid-size raise, and the AI recognizes that now is the time to pounce; the confused player calls the bet and gets milked for almost all their money. An excellent slow play from the AI. Now, note that one hand is difficult to evaluate in isolation. This was a great hand indeed, but we need to look at entire games to get a better grasp of the capabilities of this AI. So if we look at the dollar equivalent value of the chips in the game, the AI was able to win $1,000 from these five professional poker players every hour. It also uses very few resources, can be trained in the cloud for only several hundred dollars, and exceeds human-level performance within only 20 hours. What you see here is a decision tree that explains how the algorithm figures out whether to check or bet, and as you see here, this tree is traversed in a depth-first way, so first it descends deep into one possible decision and later, as more options are being unrolled and evaluated, the probabilities of these choices are updated above. In simpler words, first the AI seems somewhat sure that checking would be the right choice here, but after carefully evaluating both decisions, it is able to further reinforce this choice.
One of the professional players noted that the bot is a much more efficient bluffer than a human and always puts on a lot of pressure. Now note that this is also a general learning technique and is not tailored specifically for poker, and as a result, the authors of the paper noted that they will also try it on other imperfect-information games in the future. What a time to be alive! This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It can save you a ton of time and money in these projects and is being used by OpenAI, Toyota Research, Stanford and Berkeley. It is really easy to use. In fact, this blog post describes how you can visualize your Keras models with only one line of code. When you run this model, it will also start saving relevant metrics for you, and here, you can see the visualization of the mentioned model and these metrics as well. That's it. You're done. It can do a lot more than this, of course, and you know what the best part is? The best part is that it's free and will always be free for academics and open source projects. Make sure to visit them through wandb.com slash papers, that is w-a-n-d-b.com slash papers, or just click the link in the video description and sign up for a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
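The depth-first tree traversal mentioned in the transcript above, where the probabilities of checking or betting are updated from values found deeper in the tree, can be caricatured in a few lines. This is only a toy sketch under my own assumptions (a hand-written two-action tree with made-up payoffs and a softmax over the backed-up values), not the actual search used by the poker AI.

```python
import math

# Toy decision tree: each inner node maps an action to a child, leaves are payoffs.
tree = {
    "check": {"opponent bets": -1.0, "opponent checks": 0.5},
    "bet":   {"opponent folds": 1.0, "opponent calls": 2.0, "opponent raises": -3.0},
}

def value(node):
    """Depth-first evaluation: leaves return their payoff, inner nodes average their children."""
    if isinstance(node, dict):
        return sum(value(child) for child in node.values()) / len(node)
    return node

def action_probabilities(root, temperature=1.0):
    """Back the depth-first values up into a probability for each action via a softmax."""
    values = {action: value(child) for action, child in root.items()}
    weights = {action: math.exp(v / temperature) for action, v in values.items()}
    total = sum(weights.values())
    return {action: w / total for action, w in weights.items()}

print(action_probabilities(tree))  # roughly {'check': 0.44, 'bet': 0.56} for these payoffs
```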
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Karo Zonaifahir."}, {"start": 4.4, "end": 9.88, "text": " Today, the game we'll be talking about is the 6 player, no limit hold and poker, which"}, {"start": 9.88, "end": 12.92, "text": " is one of the more popular poker variants out there."}, {"start": 12.92, "end": 18.04, "text": " And the goal of this project was to build a poker AI that never played against a human"}, {"start": 18.04, "end": 25.96, "text": " before and learns entirely through self play and is able to defeat professional human players."}, {"start": 25.96, "end": 30.560000000000002, "text": " During these tests, two of the players that were tested against are former World Series"}, {"start": 30.560000000000002, "end": 32.96, "text": " of Poker main event winners."}, {"start": 32.96, "end": 38.56, "text": " And of course, before you ask, yes, in a moment we'll look at an example hand that shows"}, {"start": 38.56, "end": 41.56, "text": " how the AI traps a human player."}, {"start": 41.56, "end": 47.72, "text": " Poker is very difficult to learn for AI bots because it is a game of imperfect information."}, {"start": 47.72, "end": 53.52, "text": " For instance, chess is a game of perfect information where we see all the pieces and can make"}, {"start": 53.52, "end": 56.84, "text": " a good decision if we analyze the situation well."}, {"start": 56.84, "end": 62.400000000000006, "text": " However, not so much in poker because only at the very end of the hand do the players"}, {"start": 62.400000000000006, "end": 64.12, "text": " show what they have."}, {"start": 64.12, "end": 67.96000000000001, "text": " This makes it extremely difficult to train an AI to do well."}, {"start": 67.96000000000001, "end": 71.24000000000001, "text": " And now, let's have a look at the promised example hand here."}, {"start": 71.24000000000001, "end": 75.96000000000001, "text": " We talked about imperfect information just a moment ago, so I'll note that all the cards"}, {"start": 75.96000000000001, "end": 80.36, "text": " are shown face up for us to make the analysis of this hand easier."}, {"start": 80.36, "end": 83.28, "text": " Of course, this is not how the hands were played."}, {"start": 83.28, "end": 89.04, "text": " You see the AI up here marked with P2 sitting pretty with a jack and a queen and before"}, {"start": 89.04, "end": 94.72, "text": " the flop happens, which is when the first three cards are revealed, only one human player"}, {"start": 94.72, "end": 97.44, "text": " seems to be interested in this hand."}, {"start": 97.44, "end": 103.2, "text": " During the flop, the AI paired its queen and has a jack as a kicker, which, if played well,"}, {"start": 103.2, "end": 106.4, "text": " is going to be disastrous for the human player."}, {"start": 106.4, "end": 108.32, "text": " So why is that?"}, {"start": 108.32, "end": 113.67999999999999, "text": " You see, the human player also paired their queen but has a weaker kicker and will therefore"}, {"start": 113.67999999999999, "end": 115.52, "text": " lose to the AI's hand."}, {"start": 115.52, "end": 120.52, "text": " In this case, these players think they have a strong hand and will get lots of value out"}, {"start": 120.52, "end": 125.44, "text": " of it, only to find out that they will be the one milked by the AI."}, {"start": 125.44, "end": 127.88, "text": " So how exactly does that happen?"}, {"start": 127.88, "end": 130.07999999999998, "text": " Well, look here carefully."}, {"start": 
130.07999999999998, "end": 135.56, "text": " The bot shows weakness by checking here to which the human player's answer is a small"}, {"start": 135.56, "end": 136.56, "text": " raise."}, {"start": 136.56, "end": 142.84, "text": " But again shows weakness by just calling the raise and checking again on the turn, essentially"}, {"start": 142.84, "end": 149.4, "text": " saying, I am weak, don't hurt me."}, {"start": 149.4, "end": 155.12, "text": " By the time we get to the river, the AI, again, appears weak to the human player who now"}, {"start": 155.12, "end": 161.0, "text": " tries to make the bot with a mid-size raise and the AI recognizes that now is the time"}, {"start": 161.0, "end": 167.36, "text": " to pounce, the confused player calls the bot and gets milked for almost all their money."}, {"start": 167.36, "end": 169.84, "text": " An excellent slowplay from the AI."}, {"start": 169.84, "end": 174.24, "text": " Now, note that one hand is difficult to evaluate in isolation."}, {"start": 174.24, "end": 179.04, "text": " This was a great hand indeed, but we need to look at entire games to get a better grasp"}, {"start": 179.04, "end": 181.72, "text": " of the capabilities of this AI."}, {"start": 181.72, "end": 186.56, "text": " So if we look at the dollar equivalent value of the chips in the game, the AI was able"}, {"start": 186.56, "end": 192.36, "text": " to win $1,000 from these five professional poker players every hour."}, {"start": 192.36, "end": 197.52, "text": " It also uses very little resources, can be trained in the cloud for only several hundred"}, {"start": 197.52, "end": 203.6, "text": " dollars and exceeds human level performance within only 20 hours."}, {"start": 203.6, "end": 208.48000000000002, "text": " What you see here is a decision tree that explains how the algorithm figures out whether"}, {"start": 208.48000000000002, "end": 214.48000000000002, "text": " to check or bet, and as you see here, this tree is traversed in a depth first way, so"}, {"start": 214.48, "end": 221.67999999999998, "text": " first it descends deep into one possible decision and later, as more options are being enrolled"}, {"start": 221.67999999999998, "end": 226.51999999999998, "text": " and evaluated, the probability of these choices are updated above."}, {"start": 226.51999999999998, "end": 231.56, "text": " In simpler words, first the AI seems somewhat sure that checking would be the good choice"}, {"start": 231.56, "end": 237.23999999999998, "text": " here, but after carefully evaluating both decisions, it is able to further reinforce this"}, {"start": 237.23999999999998, "end": 238.64, "text": " choice."}, {"start": 238.64, "end": 242.64, "text": " One of the professional players noted that the bot is a much more efficient bluffer"}, {"start": 242.64, "end": 246.48, "text": " than a human and always puts on a lot of pressure."}, {"start": 246.48, "end": 251.39999999999998, "text": " Now note that this is also a general learning technique and is not tailored specifically"}, {"start": 251.39999999999998, "end": 256.59999999999997, "text": " for poker and as a result, the authors of the paper noted that they will also try it"}, {"start": 256.59999999999997, "end": 259.96, "text": " on other imperfect information games in the future."}, {"start": 259.96, "end": 261.84, "text": " What a time to be alive!"}, {"start": 261.84, "end": 265.28, "text": " This episode has been supported by weights and biases."}, {"start": 265.28, "end": 270.08, "text": " Wates and biases provides tools to track 
your experiments in your deep learning projects."}, {"start": 270.08, "end": 276.12, "text": " It can save you a ton of time and money in these projects and is being used by OpenAI,"}, {"start": 276.12, "end": 279.32, "text": " Toyota Research, Stanford and Berkeley."}, {"start": 279.32, "end": 281.35999999999996, "text": " It is really easy to use."}, {"start": 281.35999999999996, "end": 287.24, "text": " In fact, this blog post describes how you can visualize your Keras models with only one"}, {"start": 287.24, "end": 288.47999999999996, "text": " line of code."}, {"start": 288.47999999999996, "end": 293.76, "text": " When you run this model, it will also start saving relevant metrics for you and here,"}, {"start": 293.76, "end": 298.24, "text": " you can see the visualization of the mentioned model and these metrics as well."}, {"start": 298.24, "end": 299.24, "text": " That's it."}, {"start": 299.24, "end": 300.0, "text": " You're done."}, {"start": 300.0, "end": 304.28, "text": " It can do a lot more than this, of course and you know what the best part is."}, {"start": 304.28, "end": 310.4, "text": " The best part is that it's free and will always be free for academics and open source projects."}, {"start": 310.4, "end": 318.6, "text": " Make sure to visit them through whendb.com slash papers, w-a-n-db.com slash papers or just"}, {"start": 318.6, "end": 322.88, "text": " click the link in the video description and sign up for a free demo today."}, {"start": 322.88, "end": 326.88, "text": " Our thanks to weights and biases for helping us make better videos for you."}, {"start": 326.88, "end": 330.71999999999997, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=fcnjHmBcLNQ
3D Style Transfer For Video is Now Possible!
📝 The paper "Stylizing Video by Example" is available here: https://dcgi.fel.cvut.cz/home/sykorad/ebsynth.html The app is available here: https://ebsynth.com Our earlier episode on StyLit: https://www.youtube.com/watch?v=S7HlxaMmWAU ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Neural style transfer just appeared four years ago, in 2015. Style transfer is an interesting problem in machine learning research where we have two input images, one for content and one for style, and the output is our content image reimagined with this new style. The cool part is that the content can be a photo straight from our camera and the style can be a painting, which leads to the super fun results that you see here. However, most of these works are about photos. So what about video? Well, hold on to your papers, because this new work does this for video, and the results are marvelous. The process goes as follows. We take a few keyframes from the video, and the algorithm propagates our style to the remaining frames of the video, and wow. Those are some silky smooth results. Specifically, what I would like you to take a look at is the temporal coherence of the results. Proper temporal coherence means that the individual images within this video are not made independently from each other, which would introduce a disturbing flickering effect. I see none of that here, which makes me very, very happy. And now hold on to your papers again, because this technique does not use any kind of AI. No neural networks or other learning algorithms were used here. Okay, great, no AI. But is it any better than its AI-based competitors? Well, look at this. Hell yeah. This method does this magic through building a set of guide images. For instance, a mask guide highlights the stylized objects. And sure enough, we also have a temporal guide that penalizes the algorithm for making too much of a change from one frame to the next one, ensuring that the results will be smooth. Make sure to have a look at the paper for a more exhaustive description of these guides. Now, if we make a carefully crafted mixture from these guide images and plug them into a previous algorithm by the name StyLit, we talked about this algorithm before in the series, the link is in the video description, then we get these results that made me fall out of my chair. I hope you were more prepared and held on to your papers. Let me know in the comments. And you know what is even better? You can try this yourself, because the authors made a standalone tool available free of charge, just go to ebsynth.com or just click the link in the video description. Let the experiments begin. Thanks for watching and for your generous support and I'll see you next time.
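To make the role of the guide images a bit more concrete, here is a small Python sketch of how a mask guide and a temporal guide could be combined into one per-pixel cost; the weights, shapes, and exact form of the terms are assumptions for illustration, not the paper's actual energy.

```python
import numpy as np

def guide_cost(candidate, mask_guide, previous_frame,
               mask_weight=2.0, temporal_weight=1.0):
    """Per-pixel cost of a candidate stylized frame.

    - mask term: discourages stylization outside the mask guide
    - temporal term: penalizes large changes from the previously stylized frame,
      which is what suppresses flickering between frames
    """
    mask_term = mask_weight * (1.0 - mask_guide) * candidate ** 2
    temporal_term = temporal_weight * (candidate - previous_frame) ** 2
    return mask_term + temporal_term

h, w = 4, 4
candidate = np.random.rand(h, w)   # toy stylized frame
mask = np.ones((h, w))             # stylize everywhere in this tiny example
previous = np.zeros((h, w))        # nothing stylized yet in the previous frame
print(guide_cost(candidate, mask, previous).sum())
```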
[{"start": 0.0, "end": 4.0200000000000005, "text": " Dear Fellow Scholars, this is two-minute pepper sweet caro jeonaifahir."}, {"start": 4.0200000000000005, "end": 9.28, "text": " Neural style transfer just appeared four years ago in 2015."}, {"start": 9.28, "end": 14.44, "text": " Style transfer is an interesting problem in machine learning research where we have two input"}, {"start": 14.44, "end": 21.28, "text": " images, one for content and one for style and the output is our content image reimagined"}, {"start": 21.28, "end": 22.76, "text": " with this new style."}, {"start": 22.76, "end": 28.32, "text": " The cool part is that the content can be a photo straight from our camera and the style"}, {"start": 28.32, "end": 32.76, "text": " can be a painting which leads to the super fun results that you see here."}, {"start": 32.76, "end": 36.24, "text": " However, most of these works are about photos."}, {"start": 36.24, "end": 38.28, "text": " So what about video?"}, {"start": 38.28, "end": 43.84, "text": " Well, hold on to your papers because this new work does this for video and the results"}, {"start": 43.84, "end": 45.480000000000004, "text": " are marvelous."}, {"start": 45.480000000000004, "end": 47.56, "text": " The process goes as follows."}, {"start": 47.56, "end": 52.56, "text": " We take a few keyframes from the video and the algorithm propagates our style to the"}, {"start": 52.56, "end": 56.44, "text": " remaining frames of the video and wow."}, {"start": 56.44, "end": 59.04, "text": " Those are some silky smooth results."}, {"start": 59.04, "end": 63.48, "text": " In specific, what I would like you to take a look at is the temporal coherence of the"}, {"start": 63.48, "end": 64.8, "text": " results."}, {"start": 64.8, "end": 70.84, "text": " Proper temporal coherence means that the individual images within this video are not made independently"}, {"start": 70.84, "end": 75.24, "text": " from each other which would introduce a disturbing flickering effect."}, {"start": 75.24, "end": 79.68, "text": " I see none of that here which makes me very, very happy."}, {"start": 79.68, "end": 85.72, "text": " And now hold on to your papers again because this technique does not use any kind of AI."}, {"start": 85.72, "end": 89.64, "text": " Own your own networks and other learning algorithms were used here."}, {"start": 89.64, "end": 92.84, "text": " Okay, great, no AI."}, {"start": 92.84, "end": 96.52, "text": " But is it any better than its AI based competitors?"}, {"start": 96.52, "end": 100.12, "text": " Well, look at this."}, {"start": 100.12, "end": 102.36, "text": " Hell yeah."}, {"start": 102.36, "end": 106.28, "text": " This method does this magic through building a set of guide images."}, {"start": 106.28, "end": 110.96000000000001, "text": " For instance, a mass guide highlights the stylized objects."}, {"start": 110.96, "end": 116.08, "text": " And sure enough, we also have a temporal guide that penalizes the algorithm for making"}, {"start": 116.08, "end": 121.44, "text": " too much of a change from one frame to the next one, ensuring that the results will be"}, {"start": 121.44, "end": 122.52, "text": " smooth."}, {"start": 122.52, "end": 126.88, "text": " Make sure to have a look at the paper for a more exhaustive description of these guides."}, {"start": 126.88, "end": 132.51999999999998, "text": " Now, if we make a carefully crafted mixture from these guide images and plug them in to"}, {"start": 132.51999999999998, "end": 137.32, "text": " a previous algorithm by the 
name stylet, we talked about this algorithm before in the"}, {"start": 137.32, "end": 142.79999999999998, "text": " series, the link is in the video description, then we get these results that made me fall"}, {"start": 142.79999999999998, "end": 144.28, "text": " out of my chair."}, {"start": 144.28, "end": 147.84, "text": " I hope you will be more prepared and held on to your papers."}, {"start": 147.84, "end": 149.35999999999999, "text": " Let me know in the comments."}, {"start": 149.35999999999999, "end": 151.32, "text": " And you know what is even better?"}, {"start": 151.32, "end": 157.32, "text": " You can try this yourself because the authors made a standalone tool available free of charge,"}, {"start": 157.32, "end": 162.76, "text": " just go to eb synth.com or just click the link in the video description."}, {"start": 162.76, "end": 164.28, "text": " Let the experiments begin."}, {"start": 164.28, "end": 168.2, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=OEQf0AtSSsc
Tighten the Towel! Simulating Liquid-Fabric Interactions
📸 We are now available on Instagram with short snippets of our new episodes. Check us out there! https://www.instagram.com/twominutepapers/ 📝 The paper "A Multi-Scale Model for Simulating Liquid-Fabric Interactions" is available here: http://www.cs.columbia.edu/cg/wetcloth/ ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Ivelin Ivanov, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Today, I've got some fluids for you. Most hobby projects with fluid simulations involve the simulation of a piece of sloshing liquid in a virtual container. However, if you have a more elaborate project at hand, the story is not so simple anymore. This new paper elevates the quality and realism of these simulations through using mixture theory. Now, what is there to be mixed, you ask? Well, what mixture theory does for us is that it helps simulate how liquids interact with fabrics, including splashing, wringing, and more. These simulations have to take into account that the fabrics may absorb some of the liquids poured onto them and get saturated, how diffusion transports this liquid to nearby yarn strands, or, as you see here, a simulation with porous plastic where water flows off of and also through this material as well. Here you see how it can simulate honey dripping down on a piece of cloth. This is a really good one. If you're a parent with small children, you probably have lots of experience with this situation and can assess the quality of this simulation really well. The visual fidelity of these simulations is truly second to none. I love it. Now the question naturally arises, how do we know if these simulations are close to what would happen in reality? We don't just make a simulation and accept it as true to life if it looks good, right? Well, of course not. The paper also contains comparisons against real-world laboratory results to ensure the validity of these results, so make sure to have a look at it in the video description. And if you've been watching this series for a while, you may have noticed that I always recommend that you check out the papers yourself. And even though it is true that these are technical write-ups that are meant to communicate results between experts, it is beneficial for everyone to also read at least a small part of them. If you do, you'll not only see beautiful craftsmanship, but you'll also learn how to make a statement and how to prove the validity of this statement. This is a skill that is necessary to find truth. So please read your papers. Thanks for watching and for your generous support, and I'll see you next time.
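Mixture theory itself does not fit in a snippet, but the absorption and diffusion behaviors the transcript describes can be caricatured on a one-dimensional yarn strand. The capacity, pour rate, and diffusion coefficient below are invented numbers, and this is in no way the paper's solver.

```python
import numpy as np

def step(saturation, poured, capacity=1.0, diffusion=0.1):
    """One toy update of liquid saturation along a 1D yarn strand.

    1) cells absorb the poured liquid, but never beyond their capacity
    2) liquid diffuses toward neighboring, drier parts of the strand
    """
    saturation = np.minimum(saturation + poured, capacity)
    neighbors = np.roll(saturation, 1) + np.roll(saturation, -1)
    laplacian = neighbors - 2.0 * saturation
    return np.clip(saturation + diffusion * laplacian, 0.0, capacity)

yarn = np.zeros(10)
pour = np.zeros(10)
pour[5] = 0.08          # keep pouring a little liquid onto the middle of the strand
for _ in range(20):
    yarn = step(yarn, pour)
print(np.round(yarn, 2))  # saturation spreads outward from the pour location
```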
[{"start": 0.0, "end": 4.36, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karo Zsolnai-Fehir."}, {"start": 4.36, "end": 7.140000000000001, "text": " Today, I've got some fluids for you."}, {"start": 7.140000000000001, "end": 12.200000000000001, "text": " Most hobby projects with fluid simulations involve the simulation of a piece of sloshing"}, {"start": 12.200000000000001, "end": 14.48, "text": " liquid in a virtual container."}, {"start": 14.48, "end": 20.96, "text": " However, if you have a more elaborate project at hand, the story is not so simple anymore."}, {"start": 20.96, "end": 28.04, "text": " This new paper elevates the quality and realism of these simulations through using mixture theory."}, {"start": 28.04, "end": 31.119999999999997, "text": " Now, what is there to be mixed, you ask?"}, {"start": 31.119999999999997, "end": 36.92, "text": " Well, what mixture theory does for us is that it helps simulating how liquids interact"}, {"start": 36.92, "end": 41.64, "text": " with fabrics, including splashing, ringing, and more."}, {"start": 41.64, "end": 46.4, "text": " These simulations have to take into account that the fabrics may absorb some of the liquids"}, {"start": 46.4, "end": 52.44, "text": " poured onto them and get saturated, how diffusion transports this liquid to nearby yarn"}, {"start": 52.44, "end": 64.44, "text": " strands, or what you see here is a simulation with porous plastic where water flows off"}, {"start": 64.44, "end": 68.4, "text": " off and also through this material as well."}, {"start": 68.4, "end": 73.24, "text": " Here you see how it can simulate honey dripping down on a piece of cloth."}, {"start": 73.24, "end": 74.68, "text": " This is a real good one."}, {"start": 74.68, "end": 78.92, "text": " If you're a parent with small children, you probably have lots of experience with this"}, {"start": 78.92, "end": 83.8, "text": " situation and can assess the quality of this simulation really well."}, {"start": 83.8, "end": 87.96000000000001, "text": " The visual fidelity of these simulations is truly second to none."}, {"start": 87.96000000000001, "end": 89.32000000000001, "text": " I love it."}, {"start": 89.32000000000001, "end": 95.12, "text": " Now the question naturally arises, how do we know if these simulations are close to what"}, {"start": 95.12, "end": 97.32000000000001, "text": " would happen in reality?"}, {"start": 97.32000000000001, "end": 102.24000000000001, "text": " We don't just make a simulation and accept it as true to life if it looks good, right?"}, {"start": 102.24000000000001, "end": 103.92, "text": " Well, of course not."}, {"start": 103.92, "end": 109.36, "text": " The paper also contains comparisons against real world laboratory results to ensure the"}, {"start": 109.36, "end": 114.32000000000001, "text": " validity of these results, so make sure to have a look at it in the video description."}, {"start": 114.32000000000001, "end": 118.48, "text": " And if you've been watching this series for a while, you notice that I always recommend"}, {"start": 118.48, "end": 121.0, "text": " that you check out the papers yourself."}, {"start": 121.0, "end": 125.56, "text": " And even though it is true that these are technical write-ups that are meant to communicate"}, {"start": 125.56, "end": 131.04, "text": " results between experts, it is beneficial for everyone to also read at least a small"}, {"start": 131.04, "end": 132.24, "text": " part of it."}, {"start": 132.24, "end": 137.56, "text": " If you do, you'll not only 
see beautiful craftsmanship, but you'll also learn how to make"}, {"start": 137.56, "end": 142.08, "text": " a statement and how to prove the validity of this statement."}, {"start": 142.08, "end": 145.24, "text": " This is a skill that is necessary to find truth."}, {"start": 145.24, "end": 147.68, "text": " So please read your papers."}, {"start": 147.68, "end": 177.24, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=prMk6Znm4Bc
This AI Learns About Movement By Watching Frozen People
📝 The paper "Learning the Depths of Moving People by Watching Frozen People" is available here: https://mannequin-depth.github.io/ ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Ivelin Ivanov, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This paper is about endowing color images with depth information, which is typically done through depth maps. Depth maps describe how far parts of the scene are from the camera and are given with a color coding where the darker the colors are, the further away the objects are. These depth maps can be used to apply a variety of effects to the image that require knowledge about the depth of the objects within. For instance, selectively defocusing parts of the image, or even removing people and inserting new objects into the scene. If we humans look at an image, we have an intuitive understanding of it and have the knowledge to produce a depth map with pen and paper. However, this would, of course, be infeasible and would take too long, so we would prefer a machine to do it for us instead. But of course, machines don't understand the concept of 3D geometry, so they probably cannot help us with this. Or, with the power of machine learning algorithms, can they? This new paper from scientists at Google Research attempts to perform this, but with a twist. The twist is that the learning algorithm is unleashed on a dataset of what they call mannequins, or in other words, real humans are asked to stand around frozen in a variety of different positions while the camera moves around in the scene. The goal is that the algorithm would have a look at these frozen people and take into consideration the parallax of the camera movement. This means that the objects closer to the camera move more than the objects that are further away. And it turns out, this kind of knowledge can be exploited, so much so that if we train our AI properly, it will be able to predict the depth maps of people that are moving around, even if it had only seen frozen people before. This is particularly difficult because if we have an animation, we have to make sure that the guesses are consistent across time, or else we get these annoying flickering effects that you see here with previous techniques. It is still there with the new method, especially for the background, but the improvement on the human part of the image is truly remarkable. Beyond the removal and insertion techniques we talked about earlier, I am also really excited for this method as it may open up the possibility of creating video versions of these amazing portrait mode images with many of the newer smartphones people have in their pockets. Thanks for watching and for your generous support, and I'll see you next time.
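The parallax cue the method relies on, nearby objects shifting more than distant ones as the camera moves, can be written down for the simplest possible case of a sideways-translating camera. This is plain two-view geometry with made-up focal length and baseline values, not the learned model from the paper, which exists precisely because this simple relation breaks down once the people themselves move.

```python
import numpy as np

def depth_from_parallax(pixel_shift, focal_length_px=1000.0, baseline_m=0.1):
    """Classic stereo relation: points that shift less between two frames are farther away."""
    pixel_shift = np.asarray(pixel_shift, dtype=float)
    return focal_length_px * baseline_m / np.maximum(pixel_shift, 1e-6)

# A nearby person shifts by 50 pixels between frames, the background wall only by 5.
print(depth_from_parallax([50.0, 5.0]))  # -> [ 2. 20.] meters
```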
[{"start": 0.0, "end": 4.14, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zonai Fahir."}, {"start": 4.14, "end": 11.14, "text": " This paper is about endowing colored images with depth information, which is typically done through depth maps."}, {"start": 11.14, "end": 22.06, "text": " Depth maps describe how far parts of the scene are from the camera and are given with a color coding where the darker the colors are, the further away the objects are."}, {"start": 22.06, "end": 29.939999999999998, "text": " These depth maps can be used to apply a variety of effects to the image that require knowledge about the depth of the objects within."}, {"start": 29.939999999999998, "end": 38.3, "text": " For instance, selectively defocusing parts of the image, or even removing people and inserting new objects to the scene."}, {"start": 38.3, "end": 47.22, "text": " If we, humans look at an image, we have an intuitive understanding of it and have the knowledge to produce a depth map by pen and paper."}, {"start": 47.22, "end": 55.22, "text": " However, this would, of course, be infeasible and would take too long, so we would prefer a machine to do it for us instead."}, {"start": 55.22, "end": 62.019999999999996, "text": " But of course, machines don't understand the concept of 3D geometry, so they probably cannot help us with this."}, {"start": 62.019999999999996, "end": 66.22, "text": " Or, with the power of machine learning algorithms, can they?"}, {"start": 66.22, "end": 72.62, "text": " This new paper from scientists at Google Research attempts to perform this, but with a twist."}, {"start": 72.62, "end": 88.22, "text": " The twist is that the learning algorithm is unleashed on a dataset of what they call mannequins, or in other words, real humans are asked to stand around frozen in a variety of different positions while the camera moves around in the scene."}, {"start": 88.22, "end": 96.02000000000001, "text": " The goal is that the algorithm would have a look at these frozen people and take into consideration the parallax of the camera movement."}, {"start": 96.02000000000001, "end": 102.02000000000001, "text": " This means that the objects closer to the camera move more than the objects that are further away."}, {"start": 102.02, "end": 117.61999999999999, "text": " And turns out, this kind of knowledge can be exploited so much so that if we train our AI properly, it will be able to predict the depth maps of people that are moving around even if it had only seen frozen people before."}, {"start": 117.61999999999999, "end": 130.22, "text": " This is particularly difficult because if we have an animation, we have to make sure that the guesses are consistent across time, or else we get these annoying flickering effects that you see here with previous techniques."}, {"start": 130.22, "end": 138.02, "text": " It is still there with the new method, especially for the background, but the improvement on the human part of the image is truly remarkable."}, {"start": 138.02, "end": 153.82, "text": " Beyond the removal and insertion techniques we talked about earlier, I am also really excited for this method as it may open up the possibility of creating video versions of these amazing portrait mode images with many of the newer smartphones people have in their pockets."}, {"start": 153.82, "end": 160.82, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=xlrGOfvYcQc
AI Creates Near Perfect Images Of People, Dogs and More
❤️ Check out Weights & Biases here and sign up for a free demo: - Run experiments with this paper here: https://app.wandb.ai/l2k2/sonnet-sonnet_examples/runs/jizpgd0o?workspace=user-l2k2 - Free demo: https://www.wandb.com/papers 📝 The paper "Generating Diverse High-Fidelity Images with VQ-VAE-2" and its supplementary materials are available here: https://arxiv.org/abs/1906.00446 https://drive.google.com/file/d/1H2nr_Cu7OK18tRemsWn_6o5DGMNYentM/view Our latent-space material synthesis paper is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/ Learning a Manifold of Fonts: http://vecg.cs.ucl.ac.uk/Projects/projects_fonts/projects_fonts.html 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Ivelin Ivanov, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In the last few years, we have seen a bunch of new AI-based techniques that were specialized in generating new and novel images. This is mainly done through learning-based techniques, typically a generative adversarial network, GAN in short, which is an architecture where a generator neural network creates new images and passes them to a discriminator network, which learns to distinguish real photos from these fake generated images. These two networks learn and improve together, so much so that many of these techniques have become so realistic that we often can't tell they are synthetic images unless we look really closely. You see some examples here from BigGAN, a previous technique that is based on this architecture. So in these days, many of us are wondering, is there life beyond GANs? Can they be matched in terms of visual quality by a different kind of technique? Well, have a look at this paper, because it proposes a much simpler architecture that is able to generate convincing, high-resolution images quickly for a ton of different object classes. The results it is able to turn out are nothing short of amazing. Just look at that. To be able to proceed to the key idea here, we first have to talk about latent spaces. You can think of a latent space as a compressed representation that tries to capture the essence of the dataset that we have at hand. You can see a similar latent space method in action here that captures the key features that set different kinds of fonts apart and presents these options on a 2D plane. And here, you see our technique that builds a latent space for modeling a wide range of photorealistic material models. And now onto the promised key idea. As you have guessed, this new technique uses a latent space, which means that instead of thinking in pixels, it thinks more in terms of these features that commonly appear in natural photos, which also makes the generation of these images up to 30 times quicker, which is super useful, especially in the case of larger images. While we are at that, it can rapidly generate new images with a size of approximately a thousand by a thousand pixels. Machine learning is a research field that is enjoying a great deal of popularity these days, which also means that so many papers appear every day, it's getting difficult to keep track of all of them. The complexity of the average technique is also increasing rapidly over time, and what I like most about this paper is that it shows us that surprisingly simple ideas can still lead to breakthroughs. What a time to be alive. Make sure to have a look at the paper in the description, as it describes how this method is able to generate more diverse images than previous techniques and how we can measure diversity at all, because that is no trivial matter. This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It is like a shared logbook for your team, and with this, you can compare your own experiment results, put them next to what your colleagues did, and you can discuss your successes and failures much easier. It takes less than five minutes to set up and is being used by OpenAI, Toyota Research, Stanford, and Berkeley.
In fact, it is so easy to add to your project, the CEO himself, Lucas, instrumented it for you for this paper, and if you look here, you can see how the output images and the reconstruction error evolve over time, and you can even add your own visualizations. It is a sight to behold, really, so make sure to check it out in the video description, and if you liked it, visit them through wandb.com slash papers, that is w-a-n-d-b.com slash papers, or just use the link in the video description and sign up for a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
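The "thinking in features instead of pixels" mentioned in the transcript is, in this architecture, realized by snapping encoder outputs to the nearest entries of a learned codebook of latent vectors. The sketch below shows only that quantization step, with a random codebook standing in for a trained one; the sizes and values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))        # 8 codebook entries, 4-dimensional latents

def quantize(latents):
    """Replace each latent vector with its nearest codebook entry (Euclidean distance)."""
    distances = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = distances.argmin(axis=1)
    return codebook[indices], indices

encoder_output = rng.normal(size=(3, 4))  # three latent vectors from a toy "encoder"
quantized, indices = quantize(encoder_output)
print(indices)          # which codebook entry each latent was snapped to
print(quantized.shape)  # (3, 4)
```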
[{"start": 0.0, "end": 4.28, "text": " Dear Fellow Scholars, this is two-minute Papers with Karo Zonai Fahir."}, {"start": 4.28, "end": 9.52, "text": " In the last few years, we have seen a bunch of new AI-based techniques that were specialized"}, {"start": 9.52, "end": 12.36, "text": " in generating new and novel images."}, {"start": 12.36, "end": 17.96, "text": " This is mainly done through learning-based techniques, typically a generative adversarial network,"}, {"start": 17.96, "end": 22.92, "text": " again in short, which is an architecture where a generator neural network creates new"}, {"start": 22.92, "end": 29.04, "text": " images and passes it to a discriminator network which learns to distinguish real photos from"}, {"start": 29.04, "end": 31.68, "text": " these fake generated images."}, {"start": 31.68, "end": 37.32, "text": " These two networks learn and improve together so much so that many of these techniques have"}, {"start": 37.32, "end": 42.32, "text": " become so realistic that we often can't tell they are synthetic images unless we look"}, {"start": 42.32, "end": 43.92, "text": " really closely."}, {"start": 43.92, "end": 48.019999999999996, "text": " You see some examples here from BigGan, the previous technique that is based on this"}, {"start": 48.019999999999996, "end": 49.32, "text": " architecture."}, {"start": 49.32, "end": 54.32, "text": " So in these days, many of us are wondering, is there life beyond GANs?"}, {"start": 54.32, "end": 59.28, "text": " Can they be matched in terms of visual quality by a different kind of technique?"}, {"start": 59.28, "end": 64.72, "text": " Well, have a look at this paper because it proposes a much simpler architecture that is able"}, {"start": 64.72, "end": 70.92, "text": " to generate convincing, high-resolution images quickly for a ton of different object classes."}, {"start": 70.92, "end": 75.24000000000001, "text": " The results it is able to turn out is nothing short of amazing."}, {"start": 75.24000000000001, "end": 76.76, "text": " Just look at that."}, {"start": 76.76, "end": 81.68, "text": " To be able to proceed to the key idea here, we first have to talk about latent spaces."}, {"start": 81.68, "end": 87.48, "text": " You can think of a latent space as a compressed representation that tries to capture the essence"}, {"start": 87.48, "end": 89.92, "text": " of the dataset that we have at hand."}, {"start": 89.92, "end": 94.64000000000001, "text": " You can see a similar latent space method in action here that captures the key features"}, {"start": 94.64000000000001, "end": 100.44000000000001, "text": " that set different kinds of fonts apart and presents these options on a 2D plane."}, {"start": 100.44000000000001, "end": 105.80000000000001, "text": " And here, you see our technique that builds a latent space for modeling a wide range of"}, {"start": 105.80000000000001, "end": 108.60000000000001, "text": " photorealistic material models."}, {"start": 108.60000000000001, "end": 111.64000000000001, "text": " And now onto the promised key idea."}, {"start": 111.64, "end": 116.08, "text": " As you have guessed, this new technique uses a latent space, which means that instead"}, {"start": 116.08, "end": 121.72, "text": " of thinking in pixels, it thinks more in terms of these features that commonly appear in"}, {"start": 121.72, "end": 128.04, "text": " natural photos, which also makes the generation of these images up to 30 times quicker, which"}, {"start": 128.04, "end": 132.08, "text": " is super useful, especially in 
the case of larger images."}, {"start": 132.08, "end": 137.12, "text": " While we are at that, it can rapidly generate new images with the size of approximately"}, {"start": 137.12, "end": 140.04, "text": " a thousand by thousand pixels."}, {"start": 140.04, "end": 144.23999999999998, "text": " Intrusion learning is a research field that is enjoying a great deal of popularity these"}, {"start": 144.23999999999998, "end": 150.23999999999998, "text": " days, which also means that so many papers appear every day, it's getting difficult to"}, {"start": 150.23999999999998, "end": 152.0, "text": " keep track of all of them."}, {"start": 152.0, "end": 157.0, "text": " The complexity of the average technique is also increasing rapidly over time, and what"}, {"start": 157.0, "end": 162.95999999999998, "text": " I like most about this paper is that it shows us that surprisingly simple ideas can still"}, {"start": 162.95999999999998, "end": 164.56, "text": " lead to breakthroughs."}, {"start": 164.56, "end": 166.2, "text": " What a time to be alive."}, {"start": 166.2, "end": 170.6, "text": " Make sure to have a look at the paper in the description as it describes how this method"}, {"start": 170.6, "end": 175.88, "text": " is able to generate more diverse images than previous techniques and how we can measure"}, {"start": 175.88, "end": 179.56, "text": " diversity at all because that is no trivial matter."}, {"start": 179.56, "end": 183.0, "text": " This episode has been supported by weights and biases."}, {"start": 183.0, "end": 187.88, "text": " Wates and biases provides tools to track your experiments in your deep learning projects."}, {"start": 187.88, "end": 193.0, "text": " It is like a shared logbook for your team, and with this, you can compare your own experiment"}, {"start": 193.0, "end": 198.04, "text": " results, put them next to what your colleagues did, and you can discuss your successes and"}, {"start": 198.04, "end": 200.04, "text": " failures much easier."}, {"start": 200.04, "end": 205.8, "text": " It takes less than five minutes to set up and is being used by OpenAI, Toyota Research,"}, {"start": 205.8, "end": 207.64, "text": " Stanford, and Berkeley."}, {"start": 207.64, "end": 214.16, "text": " In fact, it is so easy to add to your project, the CEO himself, Lucas, instrumented it for"}, {"start": 214.16, "end": 220.52, "text": " you for this paper, and if you look here, you can see how the output images and the reconstruction"}, {"start": 220.52, "end": 225.96, "text": " error evolve over time and you can even add your own visualizations."}, {"start": 225.96, "end": 230.28, "text": " It is a site to be held really, so make sure to check it out in the video description,"}, {"start": 230.28, "end": 238.60000000000002, "text": " and if you liked it, visit them through whendb.com slash papers, www.swndb.com slash papers,"}, {"start": 238.60000000000002, "end": 243.24, "text": " or just use the link in the video description and sign up for a free demo today."}, {"start": 243.24, "end": 246.92000000000002, "text": " Our thanks to weights and biases for helping us make better videos for you."}, {"start": 246.92, "end": 250.76, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=tRHFQHYfAVc
Better Photorealistic Materials Are Coming!
📝 The paper "An Adaptive Parameterization for Efficient Material Acquisition and Rendering" is available here: https://rgl.epfl.ch/publications/Dupuy2018Adaptive My course on photorealistic rendering is available here: - Playlist: https://www.youtube.com/playlist?list=PLujxSBD-JXgnGmsn7gEyN28P1DnRZG7qi - Website: https://users.cg.tuwien.ac.at/zsolnai/gfx/rendering-course/ ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Ivelin Ivanov, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this series, we often marvel at light simulation programs that are able to create beautiful images by simulating the path of millions and millions of light rays. To make sure that our simulations look lifelike, we not only have to make sure that these rays of light interact with the geometry of the scene in a way that's physically plausible, but the materials within the simulation also have to reflect reality. Now that's an interesting problem. How do we create a convincing mathematical description of real-world materials? Well, one way to do that is taking a measurement device, putting in an example of the subject material, and measuring how rays of light bounce off of it. This work introduces a new database of sophisticated material models and includes interesting optical effects such as iridescence, which gives the colorful physical appearance of bubbles and fuel-water mixtures. It can do colorful mirror-like specular highlights and more, so we can include these materials in our light simulation programs. You see this database in action in this scene that showcases a collection of these complex material models. However, creating such a database is not without perils, because normally these materials take prohibitively many measurements to reproduce properly, and the interesting regions are often found at quite different places. This paper proposes a solution that adapts the location of these measurements to where the action happens, resulting in a mathematical description of these materials that can be measured in a reasonable amount of time. It also takes very little memory when we run the actual light simulation on them. So, as if light transport simulations weren't beautiful enough, they are about to get even more realistic in the near future. Super excited for this. Make sure to have a look at the paper, which is so good, I think I sank into a minor state of shock upon reading it. If you're enjoying learning about light transport, make sure to check out my course on this topic at the Technical University of Vienna. I still teach this at the university for about 20 master's students at a time and thought that the teachings shouldn't only be available for the lucky few people who can afford a college education. Clearly, the teachings should be available for everyone, so we recorded it and put it online, and now everyone can watch it free of charge. I was quite stunned to see that more than 25,000 people decided to start it, so make sure to give it a go if you're interested. Thanks for watching and for your generous support, and I'll see you next time.
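The adaptive measurement placement can be illustrated with a one-dimensional toy: keep adding samples where the measured curve changes the most, so that a sharp feature such as the edge of a highlight gets densely covered. The stand-in measurement function and the sample budget below are my own assumptions, not the paper's parameterization.

```python
import numpy as np

def measure(angle):
    """Stand-in measurement with one sharp transition near 0.9 (think: edge of a highlight)."""
    return 1.0 / (1.0 + np.exp(-(angle - 0.9) / 0.05))

def adaptive_samples(lo=0.0, hi=1.5, budget=20):
    """Greedily split the interval whose endpoints show the largest measured difference."""
    xs = [lo, hi]
    for _ in range(budget - 2):
        ys = [measure(x) for x in xs]
        gaps = [abs(ys[i + 1] - ys[i]) for i in range(len(xs) - 1)]
        i = int(np.argmax(gaps))
        xs.insert(i + 1, 0.5 * (xs[i] + xs[i + 1]))  # insertion keeps xs sorted
    return xs

samples = adaptive_samples()
print(np.round(samples, 3))  # samples cluster around the sharp transition near 0.9
```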
[{"start": 0.0, "end": 4.5, "text": " Dear Fellow Scholars, this is two-minute Papers with Karo Zonai Fahir."}, {"start": 4.5, "end": 9.92, "text": " In this series, we often marvel at light simulation programs that are able to create beautiful"}, {"start": 9.92, "end": 14.72, "text": " images by simulating the path of millions and millions of light rays."}, {"start": 14.72, "end": 19.38, "text": " To make sure that our simulations look lifelike, we not only have to make sure that these"}, {"start": 19.38, "end": 24.82, "text": " rays of light interact with the geometry of the scene in a way that's physically plausible,"}, {"start": 24.82, "end": 29.96, "text": " but the materials within the simulation also have to reflect reality."}, {"start": 29.96, "end": 32.28, "text": " Now that's an interesting problem."}, {"start": 32.28, "end": 37.04, "text": " How do we create a convincing mathematical description of real-world materials?"}, {"start": 37.04, "end": 42.64, "text": " Well, one way to do that is taking a measurement device, putting an example of the subject"}, {"start": 42.64, "end": 47.519999999999996, "text": " material and measuring how rays of light bounce off of it."}, {"start": 47.519999999999996, "end": 53.28, "text": " This work introduces a new database for sophisticated material models and includes interesting"}, {"start": 53.28, "end": 59.120000000000005, "text": " optical effects such as iridescence, which gives the colorful physical appearance of bubbles"}, {"start": 59.12, "end": 63.12, "text": " and fuel water mixtures."}, {"start": 63.12, "end": 68.8, "text": " It can do colorful mirror-like specular highlights and more so we can include these materials"}, {"start": 68.8, "end": 72.72, "text": " in our light simulation programs."}, {"start": 72.72, "end": 77.03999999999999, "text": " You see this database in action in this scene that showcases a collection of these complex"}, {"start": 77.03999999999999, "end": 78.72, "text": " material models."}, {"start": 78.72, "end": 85.03999999999999, "text": " However, creating such a database is not without perils because normally these materials"}, {"start": 85.04, "end": 90.4, "text": " take prohibitively many measurements to reproduce properly and the interesting regions are often"}, {"start": 90.4, "end": 93.04, "text": " found at quite different places."}, {"start": 93.04, "end": 97.60000000000001, "text": " This paper proposes a solution that adapts the location of these measurements to where"}, {"start": 97.60000000000001, "end": 102.60000000000001, "text": " the action happens, resulting in a mathematical description of these materials that can be"}, {"start": 102.60000000000001, "end": 105.52000000000001, "text": " measured in a reasonable amount of time."}, {"start": 105.52000000000001, "end": 110.0, "text": " It also takes very little memory when we run the actual light simulation on them."}, {"start": 110.0, "end": 115.08, "text": " So, as if light transport simulations weren't beautiful enough, they are about to get even"}, {"start": 115.08, "end": 117.44, "text": " more realistic in the near future."}, {"start": 117.44, "end": 119.24, "text": " Super excited for this."}, {"start": 119.24, "end": 124.08, "text": " Make sure to have a look at the paper, which is so good, I think I sank into a minor"}, {"start": 124.08, "end": 126.48, "text": " state of shock upon reading it."}, {"start": 126.48, "end": 131.08, "text": " If you're enjoying learning about light transport, make sure to check out my course on this 
topic"}, {"start": 131.08, "end": 133.12, "text": " at the Technical University of Vienna."}, {"start": 133.12, "end": 138.68, "text": " I still teach this at the university for about 20 master students at a time and thought"}, {"start": 138.68, "end": 143.88, "text": " that the teachings shouldn't only be available for a lack of few people who can afford a college"}, {"start": 143.88, "end": 144.88, "text": " education."}, {"start": 144.88, "end": 151.36, "text": " Clearly, the teachings should be available for everyone, so we recorded it and put it online"}, {"start": 151.36, "end": 154.6, "text": " and now everyone can watch it free of charge."}, {"start": 154.6, "end": 160.16, "text": " I was quite stunned to see that more than 25,000 people decided to start it, so make sure"}, {"start": 160.16, "end": 162.0, "text": " to give it a go if you're interested."}, {"start": 162.0, "end": 168.92, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=hYWr67i8z5o
AI Discovers Sentiment By Writing Amazon Reviews
❤️ Support the show on Patreon: https://www.patreon.com/TwoMinutePapers ₿ Crypto and PayPal links are available below. Thank you very much for your generous support! › PayPal: https://www.paypal.me/TwoMinutePapers › Bitcoin: 1a5ttKiVQiDcr9j8JT2DoHGzLG7XTJccX › Ethereum: 0xbBD767C0e14be1886c6610bf3F592A91D866d380 › LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg 📝 The paper "Learning to Generate Reviews and Discovering Sentiment" is available here: https://openai.com/blog/unsupervised-sentiment-neuron/ https://arxiv.org/abs/1704.01444 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Ivelin Ivanov, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In 2017, scientists at OpenAI embarked on an AI project where they wanted to show a neural network a bunch of Amazon product reviews and teach it to generate new ones, or continue a review when given one. Now, so far, this sounds like a nice hobby project, definitely not something that would justify an entire video on this channel. However, during this experiment, something really unexpected happened. Now, it is clear that when the neural network reads these reviews, knowing that it has to generate new ones, it builds up a deep understanding of language. However, beyond that, it used surprisingly few neurons to continue these reviews, and scientists were wondering: why is that? Usually, the more neurons, the more powerful the AI can get, so why use so few neurons? The reason is that it has learned something really interesting. I'll tell you what in a moment. This neural network was trained in an unsupervised manner, therefore it was told what the task was but was given no further supervision, no labeled datasets, no additional help, nothing. Upon closer inspection, they noticed that the neural network has built up not only a knowledge of language, but also a sentiment detector as well. This means that the AI recognized that in order to be able to continue a review, it needs to be able to efficiently detect whether the new review seems positive or not. And thus, it dedicated a neuron to this task, which is referred to as the sentiment neuron. However, it was no ordinary sentiment neuron, it was a proper state-of-the-art sentiment detector. In this diagram, you see this neuron at work. As it reads through the review, it starts out detecting a positive outlook, which you can see with green, and then, uh oh, it detects that the review has taken a turn and is not happy with the movie at all. And all this was learned on a relatively small dataset. Now, if we have this sentiment neuron, we don't just have to sit around and be happy for it. Let's play with it. For instance, by overwriting this sentiment neuron in the network, we can force it to create positive or negative reviews. Here is a positive example. Quote: "Just what I was looking for, nice fitted pants, exactly matched seam to color contrast with other pants I own, highly recommended and also very happy." And if we overwrite the sentiment neuron to negative, we get the following: "The package received was blank and has no barcode, a waste of time and money." There are some more examples here on the screen for your pleasure. This paper teaches us that we should endeavor to not just accept these AI-based solutions, but look under the hood, and sometimes a gold mine of knowledge can be found within. Absolutely amazing. If you have enjoyed this episode and would like to help us make better videos for you in the future, please consider supporting us on patreon.com slash TwoMinutePapers or just click the link in the video description. In return, we can offer you early access to these episodes, or even add your name to our key supporters so you can appear in the description of every video, and more. We also support cryptocurrencies like Bitcoin, Ethereum and Litecoin. The majority of these funds is used to improve the show, and we use a smaller part to give back to the community and empower science conferences like the Central European Conference on Computer Graphics. This is a conference that teaches young scientists to present their work at bigger venues later, and with your support, it's now the second year we've been able to sponsor them, which warms my heart. This is why every episode ends with, well, you know the drill. Thanks for watching and for your generous support, and I'll see you next time.
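As a rough illustration of what "overwriting the sentiment neuron" means in practice, here is a toy PyTorch sketch. It is not OpenAI's model or code: their network is a large multiplicative LSTM trained on millions of reviews, while this is a tiny, untrained character-level LSTM, so it will only emit gibberish. The point is merely to show where the overwrite happens: one chosen hidden unit is clamped to a fixed value before every generation step. The unit index and clamp value below are arbitrary.

```python
import torch
import torch.nn as nn

# Toy character-level LSTM language model; the real model is far larger and trained.
vocab = list("abcdefghijklmnopqrstuvwxyz .,!")
stoi = {c: i for i, c in enumerate(vocab)}

class CharLM(nn.Module):
    def __init__(self, vocab_size, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 32)
        self.lstm = nn.LSTMCell(32, hidden)
        self.head = nn.Linear(hidden, vocab_size)

    def step(self, ch, state):
        h, c = self.lstm(self.embed(ch), state)
        return self.head(h), (h, c)

def generate(model, prompt, n_chars=40, sentiment_unit=None, value=0.0):
    h, c = torch.zeros(1, 64), torch.zeros(1, 64)
    out = prompt
    for ch in prompt:                       # prime the model on the prompt
        _, (h, c) = model.step(torch.tensor([stoi[ch]]), (h, c))
    for _ in range(n_chars):
        if sentiment_unit is not None:
            # Overwrite the chosen "sentiment" unit before predicting the next character.
            h = h.clone()
            h[0, sentiment_unit] = value
        logits, (h, c) = model.step(torch.tensor([stoi[out[-1]]]), (h, c))
        out += vocab[int(torch.multinomial(torch.softmax(logits, -1), 1))]
    return out

model = CharLM(len(vocab))
print(generate(model, "just what i was ", sentiment_unit=5, value=3.0))
```

In the actual work, the index of the sentiment unit is found by checking which hidden unit best predicts review sentiment, and the sign of the clamp value steers the generated review positive or negative.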
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Karo Zsolnai-Fehir."}, {"start": 4.64, "end": 11.200000000000001, "text": " In 2017, scientists at OpenAI embarked on an AI project where they wanted to show a"}, {"start": 11.200000000000001, "end": 17.2, "text": " neural network a bunch of Amazon product reviews and wanted to teach it to be able to generate"}, {"start": 17.2, "end": 21.080000000000002, "text": " new ones or continue a review when given one."}, {"start": 21.080000000000002, "end": 26.560000000000002, "text": " Now, so far, this sounds like a nice hobby project, definitely not something that would"}, {"start": 26.560000000000002, "end": 29.36, "text": " justify an entire video on this channel."}, {"start": 29.36, "end": 34.56, "text": " However, during this experiment, something really unexpected happened."}, {"start": 34.56, "end": 39.4, "text": " Now it is clear that when the neural network reads these reviews, it knows that it has"}, {"start": 39.4, "end": 44.68, "text": " to generate new ones, therefore it builds up a deep understanding of language."}, {"start": 44.68, "end": 50.28, "text": " However, beyond that, it used surprisingly few neurons to continue these reviews and"}, {"start": 50.28, "end": 53.64, "text": " scientists were wondering why is that?"}, {"start": 53.64, "end": 60.32, "text": " Usually, the more neurons, the more powerful the AI can get, so why use so few neurons."}, {"start": 60.32, "end": 64.04, "text": " The reason for that is that it has learned something really interesting."}, {"start": 64.04, "end": 65.96000000000001, "text": " I'll tell you what in a moment."}, {"start": 65.96000000000001, "end": 71.16, "text": " This neural network was trained in an unsupervised manner, therefore it was told to do what"}, {"start": 71.16, "end": 78.72, "text": " the task was but was given no further supervision, no label datasets, no additional help, nothing."}, {"start": 78.72, "end": 83.52, "text": " Upon closer inspection, they noticed that the neural network has built up a knowledge"}, {"start": 83.52, "end": 88.47999999999999, "text": " of not only language, but also built a sentiment detector as well."}, {"start": 88.47999999999999, "end": 93.28, "text": " This means that the AI recognized that in order to be able to continue a review, it needs"}, {"start": 93.28, "end": 98.56, "text": " to be able to efficiently detect whether the new review seems positive or not."}, {"start": 98.56, "end": 103.56, "text": " And thus, it dedicated the neuron to this task which we were referred to as the sentiment"}, {"start": 103.56, "end": 104.56, "text": " neuron."}, {"start": 104.56, "end": 110.32, "text": " However, it was no ordinary sentiment neuron, it was a proper state of the art sentiment"}, {"start": 110.32, "end": 111.44, "text": " detector."}, {"start": 111.44, "end": 114.16, "text": " In this diagram, you see this neuron at work."}, {"start": 114.16, "end": 118.67999999999999, "text": " As it reads through the review, it starts out detecting a positive outlook which you"}, {"start": 118.67999999999999, "end": 125.67999999999999, "text": " can see with green and then, uh oh, it detects that the review has taken a turn and is not"}, {"start": 125.67999999999999, "end": 127.68, "text": " happy with the movie at all."}, {"start": 127.68, "end": 131.16, "text": " And all this was learned on a relatively small dataset."}, {"start": 131.16, "end": 135.96, "text": " Now, if we have this sentiment neuron, we don't just 
have to sit around and be happy for"}, {"start": 135.96, "end": 136.96, "text": " it."}, {"start": 136.96, "end": 137.96, "text": " Let's play with it."}, {"start": 137.96, "end": 143.0, "text": " For instance, by overwriting this sentiment neuron in the neuron at work, we can force"}, {"start": 143.0, "end": 146.68, "text": " it to create positive or negative reviews."}, {"start": 146.68, "end": 148.52, "text": " Here is a positive example."}, {"start": 148.52, "end": 149.52, "text": " Quote."}, {"start": 149.52, "end": 155.04000000000002, "text": " Just what I was looking for, nice fitted pants exactly matched seem to color contrast with"}, {"start": 155.04000000000002, "end": 159.92000000000002, "text": " other pants I own, highly recommended and also very happy."}, {"start": 159.92000000000002, "end": 165.32, "text": " And if we overwrite the sentiment neuron to negative, we get the following."}, {"start": 165.32, "end": 170.51999999999998, "text": " The package received was blank and has no barcode, a waste of time and money."}, {"start": 170.51999999999998, "end": 173.88, "text": " There are some more examples here on the screen for your pleasure."}, {"start": 173.88, "end": 179.23999999999998, "text": " This paper teaches us that we should endeavor to not just accept these AI-based solutions"}, {"start": 179.23999999999998, "end": 184.84, "text": " but look under the hood and sometimes a gold mine of knowledge can be found within."}, {"start": 184.84, "end": 186.64, "text": " Absolutely amazing."}, {"start": 186.64, "end": 190.16, "text": " If you have enjoyed this episode and would like to help us make better videos for you"}, {"start": 190.16, "end": 197.12, "text": " in the future, please consider supporting us on patreon.com slash 2 minute papers or just"}, {"start": 197.12, "end": 199.24, "text": " click the link in the video description."}, {"start": 199.24, "end": 205.07999999999998, "text": " In return, we can offer you early access to these episodes or even add your name to our"}, {"start": 205.07999999999998, "end": 210.64, "text": " key supporters so you can appear in the description of every video and more."}, {"start": 210.64, "end": 215.07999999999998, "text": " We also support cryptocurrencies like Bitcoin, Ethereum and Litecoin."}, {"start": 215.08, "end": 220.12, "text": " The majority of these fans is used to improve the show and we use a smaller part to give"}, {"start": 220.12, "end": 225.88000000000002, "text": " back to the community and empower science conferences like the Central European Conference on"}, {"start": 225.88000000000002, "end": 227.36, "text": " Computer Graphics."}, {"start": 227.36, "end": 232.52, "text": " This is a conference that teaches young scientists to present their work at bigger venues later"}, {"start": 232.52, "end": 237.04000000000002, "text": " and with your support it's now the second year we've been able to sponsor them which"}, {"start": 237.04000000000002, "end": 238.36, "text": " warms my heart."}, {"start": 238.36, "end": 241.36, "text": " This is why every episode ends with you know the drill."}, {"start": 241.36, "end": 245.32000000000002, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=kie4wjB1MCw
Virtual Characters Learn To Work Out…and Undergo Surgery 💪
📝 The paper "Scalable Muscle-actuated Human Simulation and Control" is available here: http://mrl.snu.ac.kr/research/ProjectScalable/Page.htm ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Ivelin Ivanov, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This work is about creating virtual characters with a skeletal system, adding more than 300 muscles, and teaching them to use these muscles to kick, jump, move around, and perform other realistic human movements. Throughout this video, you will see the activated muscles in red. I am loving the idea, which, it turns out, comes with lots of really interesting corollaries. For instance, this simulation realistically portrays how increasing the amount of weight to be lifted changes which muscles are being trained during a workout. These agents also learned to jump really high, and you can see a drastic difference between the movement required for a mediocre jump and an amazing one. As we are teaching these virtual agents within a simulation, we can perform all kinds of crazy experiments by giving them horrendous special conditions, such as bone deformities, a stiff ankle, or muscle deficiencies, and watch them learn to walk despite these setbacks. For instance, here you see that the muscles in the left thigh are contracted, resulting in a stiff knee, and as a result, the agent learned an asymmetric gait. If the thigh bones are twisted inwards, ouch. The AI shows that it is still possible to control the muscles to walk in a stable manner. I don't know about you, but at this point I'm feeling quite sorry for these poor simulated beings, so let's move on. We have plenty of less gruesome, but equally interesting things to test here. In fact, if we are in a simulation, why not take it further? It doesn't cost anything. That's exactly what the authors did, and it turns out that we can even simulate the use of prosthetics. However, since we don't need to manufacture these prosthetics, we can try a large number of different designs and evaluate their usefulness without paying a dime. How cool is that? So far, we have hamstrung this poor character many, many times, so why not try to heal it? With this technique, we can also quickly test the effect of different kinds of surgeries on the movement of the patient. With this, you can see here how a hamstring surgery can extend the range of motion of this skeleton. It also tells us not to try our luck with one-legged squats. You heard it here, folks. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.14, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Karo Zsolnai-Fehir."}, {"start": 4.14, "end": 8.78, "text": " This work is about creating virtual characters with a skeletal system,"}, {"start": 8.78, "end": 14.120000000000001, "text": " adding more than 300 muscles and teaching them to use these muscles to kick,"}, {"start": 14.120000000000001, "end": 19.04, "text": " jump, move around and perform other realistic human movements."}, {"start": 19.04, "end": 23.0, "text": " Throughout this video, you will see the activated muscles with red."}, {"start": 23.0, "end": 29.66, "text": " I am loving the idea, which turns out comes with lots of really interesting corollaries."}, {"start": 29.66, "end": 36.32, "text": " For instance, this simulation realistically portrays how increasing the amount of weight to be lifted,"}, {"start": 36.32, "end": 40.46, "text": " changes what muscles are being trained during a workout."}, {"start": 40.46, "end": 44.0, "text": " These agents also learned to jump really high,"}, {"start": 44.0, "end": 51.1, "text": " and you can see a drastic difference between the movement required for a mediocre jump and an amazing one."}, {"start": 51.1, "end": 54.5, "text": " As we are teaching these virtual agents with an assimilation,"}, {"start": 54.5, "end": 60.36, "text": " we can perform all kinds of crazy experiments by giving them horrendous special conditions,"}, {"start": 60.36, "end": 65.24, "text": " such as bone deformities, a stiff ankle, muscle deficiencies,"}, {"start": 65.24, "end": 68.84, "text": " and watch them learn to walk despite these setbacks."}, {"start": 68.84, "end": 73.66, "text": " For instance, here you see that the muscles in the left thigh are contracted,"}, {"start": 73.66, "end": 79.64, "text": " resulting in a stiff knee, and as a result, the agent learned an asymmetric gate."}, {"start": 79.64, "end": 83.36, "text": " If the thigh bones are twisted inwards, ouch."}, {"start": 83.36, "end": 89.46, "text": " The AI shows that it is still possible to control the muscles to walk in a stable manner."}, {"start": 89.46, "end": 95.26, "text": " I don't know about you, but at this point I'm feeling quite sorry for these poor simulated beings,"}, {"start": 95.26, "end": 101.26, "text": " so let's move on. We have plenty of less gruesome, but equally interesting things to test here."}, {"start": 101.26, "end": 107.26, "text": " In fact, if we are in a simulation, why not take it further? 
It doesn't cost anything."}, {"start": 107.26, "end": 114.06, "text": " That's exactly what the authors did, and it turns out that we can even simulate the use of prosthetics."}, {"start": 114.06, "end": 117.46000000000001, "text": " However, since we don't need to manufacture these prosthetics,"}, {"start": 117.46000000000001, "end": 123.86000000000001, "text": " we can try a large number of different designs and evaluate their usefulness without paying a dime."}, {"start": 123.86000000000001, "end": 125.86000000000001, "text": " How cool is that?"}, {"start": 125.86000000000001, "end": 132.46, "text": " So far, we have hamstrung this poor character many, many times, so why not try to heal it?"}, {"start": 132.46, "end": 139.96, "text": " With this technique, we can also quickly test the effect of different kinds of surgeries on the movement of the patient."}, {"start": 139.96, "end": 147.46, "text": " With this, you can see here how a hamstring surgery can extend the range of motion of this skeleton."}, {"start": 154.66, "end": 159.06, "text": " It also tells us not to try our luck with one leg squats."}, {"start": 159.06, "end": 165.06, "text": " You heard it here, folks. Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=2xWnOL5bts8
Rewrite Videos By Editing Text
📝 The paper "Text-based Editing of Talking-head Video" is available here: https://www.ohadf.com/projects/text-based-editing/ ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Ivelin Ivanov, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #DeepFake
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. The last few years have been an amazing ride when it comes to research works for creating facial reenactments of real characters. Beyond just transferring our gestures to a video footage of an existing talking head, controlling their gestures like video game characters and full body movement transfer are also a possibility. With WaveNet and its many variants, we can also learn someone's way of speaking, write a piece of text, and make an audio waveform where we impersonate them using their own voice. So, what else is there to do in this domain? Are we done? No, no, not at all. Hold onto your papers, because with this amazing new technique, what we can do is look at the transcript of a talking head video, remove parts of it, or add to it, just as we would edit any piece of text, and this technique produces both the audio and the matching video of this person uttering these words. Check this out. With Apple's stock price at $191.45 per share. It works by looking through the video, collecting small sounds that can be used to piece together the new word that we've added to the transcript. The authors demonstrate this by adding the word FOX to the transcript. This can be pieced together from the V, which appears in the word VIPER, and the OX, taken from another word found in the footage. As a result, one can make the character say FOX without ever having heard her utter this word before. Then we can look for not only the audio occurrences of these sounds, but also the video footage of how they are being said, and in the paper, a technique is proposed to blend these video assets together. Finally, we can provide all this information to a neural renderer that synthesizes a smooth video of this talking head. This is a beautiful architecture with lots of contributions, so make sure to have a look at the paper in the description for more details. And of course, as it is not easy to measure the quality of these results in a mathematical manner, a user study was made where they asked some fellow humans which is the real footage and which one was edited. You will see the footage edited by this algorithm on the right. And it's not easy to tell which one is which, and it also shows in the numbers, which are not perfect, but they clearly show that the fake video is very often confused with the real one. Did you find any artifacts that give the trick away? Perhaps the sentence was said a touch faster than expected. Found anything else? Let me know in the comments below. The paper also contains tons of comparisons against previous works. So, in the last few years, the trend seems clear. The bar is getting lower, it is getting easier and easier to produce these kinds of videos, and it is getting harder and harder to catch them with our naked eyes, and now we can even edit the transcript of what is being said, which is super convenient. I would like to note that AIs also exist that can detect these edited videos with high confidence. I put up the ethical considerations of the authors here; it is definitely worthy of your attention, as it discusses how they think about these techniques. The motivation for this work was mainly to enhance digital storytelling by removing filler words, potentially flubbed phrases, or retiming sentences in talking head videos. There is so much more to it, so make sure to pause the video and read their full statement. Thanks for watching and for your generous support, and I'll see you next time.
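To get a feel for the "piece the new word together from snippets that already exist in the footage" idea, here is a deliberately simplified Python sketch. The real system matches phonemes and visemes with timing information and then blends the corresponding video segments; this toy version only matches character fragments against words in a made-up transcript, so treat it purely as an illustration of the search step.

```python
# Toy illustration of piecing a new word together from fragments of existing words.
def find_fragments(word, transcript_words):
    """Greedily cover `word` with the longest fragments found inside transcript words."""
    pieces, i = [], 0
    while i < len(word):
        # Try the longest remaining fragment first.
        for j in range(len(word), i, -1):
            fragment = word[i:j]
            source = next((w for w in transcript_words if fragment in w), None)
            if source is not None:
                pieces.append((fragment, source))
                i = j
                break
        else:
            pieces.append((word[i], None))  # No source found for this character.
            i += 1
    return pieces

transcript = "the viper struck the box near the fence".split()
print(find_fragments("fox", transcript))
# e.g. [('f', 'fence'), ('ox', 'box')] -- fragments borrowed from other words
```

As in the FOX example from the video, the target word ends up covered by fragments borrowed from other words; the rest of the pipeline (blending and the neural renderer) is what turns such fragments into a seamless clip.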
[{"start": 0.0, "end": 4.28, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karo Zsolnai-Fehir."}, {"start": 4.28, "end": 8.700000000000001, "text": " The last few years have been an amazing ride when it comes to research works for creating"}, {"start": 8.700000000000001, "end": 11.8, "text": " facial reenactments for real characters."}, {"start": 11.8, "end": 16.96, "text": " Beyond just transferring our gestures to a video footage of an existing talking head,"}, {"start": 16.96, "end": 23.32, "text": " controlling their gestures like video game characters, and full body movement transfer are also"}, {"start": 23.32, "end": 24.92, "text": " a possibility."}, {"start": 24.92, "end": 30.62, "text": " In WaveNet and its many variants, we can also learn someone's way of speaking, write a"}, {"start": 30.62, "end": 36.34, "text": " piece of text, and make an audio waveform where we can impersonate them using their own"}, {"start": 36.34, "end": 37.34, "text": " voice."}, {"start": 37.34, "end": 40.480000000000004, "text": " So, what else is there to do in this domain?"}, {"start": 40.480000000000004, "end": 41.480000000000004, "text": " Are we done?"}, {"start": 41.480000000000004, "end": 43.88, "text": " No, no, not at all."}, {"start": 43.88, "end": 48.64, "text": " Hold onto your papers because with this amazing new technique, what we can do is look at"}, {"start": 48.64, "end": 54.96, "text": " the transcript of a talking head video, remove parts of it, or add to it, just as we would"}, {"start": 54.96, "end": 61.4, "text": " edit any piece of text, and this technique produces both the audio and the matching video"}, {"start": 61.4, "end": 64.2, "text": " of this person uttering these words."}, {"start": 64.2, "end": 65.2, "text": " Check this out."}, {"start": 65.2, "end": 70.2, "text": " With Apple's stock price at $191.45 per share."}, {"start": 70.2, "end": 85.68, "text": " It works by looking through the video, collecting small sounds that can be used to piece together"}, {"start": 85.68, "end": 88.92, "text": " this new word that we've added to the transcript."}, {"start": 88.92, "end": 93.76, "text": " The authors demonstrate this by adding the word FOX to the transcript."}, {"start": 93.76, "end": 100.4, "text": " This can be piece together by the V, which appears in the word VIPER, and taking OX as a part"}, {"start": 100.4, "end": 103.12, "text": " of another word found in the footage."}, {"start": 103.12, "end": 108.44, "text": " As a result, one can make the character say FOX without ever hearing her uttering this"}, {"start": 108.44, "end": 110.08000000000001, "text": " word before."}, {"start": 110.08000000000001, "end": 116.08000000000001, "text": " Then we can look for not only the audio occurrences for these sounds, but the video footage of how"}, {"start": 116.08000000000001, "end": 121.12, "text": " they are being said, and in the paper, a technique is proposed to blend these video"}, {"start": 121.12, "end": 122.60000000000001, "text": " assets together."}, {"start": 122.6, "end": 128.64, "text": " Finally, we can provide all this information to a neural renderer that synthesizes a smooth"}, {"start": 128.64, "end": 131.0, "text": " video of this talking head."}, {"start": 131.0, "end": 135.51999999999998, "text": " This is a beautiful architecture with lots of contributions, so make sure to have a look"}, {"start": 135.51999999999998, "end": 138.68, "text": " at the paper in the description for more details."}, {"start": 138.68, "end": 143.4, "text": " 
And of course, as it is not easy to measure the quality of these results in a mathematical"}, {"start": 143.4, "end": 149.76, "text": " manner, a user study was made where they asked some fellow humans, which is the real footage,"}, {"start": 149.76, "end": 151.51999999999998, "text": " and which one was edited."}, {"start": 151.52, "end": 158.52, "text": " You will see the footage edited by this algorithm on the right."}, {"start": 158.52, "end": 171.76000000000002, "text": " And it's not easy to tell which one is which, and it also shows in the numbers, which"}, {"start": 171.76000000000002, "end": 177.32000000000002, "text": " are not perfect, but they clearly show that the fake video is very often confused with"}, {"start": 177.32000000000002, "end": 178.64000000000001, "text": " the real one."}, {"start": 178.64, "end": 181.79999999999998, "text": " Did you find any artifacts that give the trick away?"}, {"start": 181.79999999999998, "end": 186.2, "text": " Perhaps the sentence was said a touch faster than expected."}, {"start": 186.2, "end": 187.2, "text": " Found anything else?"}, {"start": 187.2, "end": 189.0, "text": " Let me know in the comments below."}, {"start": 189.0, "end": 193.48, "text": " The paper also contains tons of comparisons against previous works."}, {"start": 193.48, "end": 197.39999999999998, "text": " So, in the last few years, the trend seems clear."}, {"start": 197.39999999999998, "end": 203.23999999999998, "text": " The bar is getting lower, it is getting easier and easier to produce these kinds of videos,"}, {"start": 203.23999999999998, "end": 208.6, "text": " and it is getting harder and harder to catch them with our naked eyes, and now we can"}, {"start": 208.6, "end": 213.16, "text": " edit the transcript of what is being said, which is super convenient."}, {"start": 213.16, "end": 218.68, "text": " I would like to note that AIs also exist that can detect these edited videos with a high"}, {"start": 218.68, "end": 219.68, "text": " confidence."}, {"start": 219.68, "end": 224.44, "text": " I put up the ethical considerations of the authors here, it is definitely worthy of your"}, {"start": 224.44, "end": 228.56, "text": " attention as it discusses how they think about these techniques."}, {"start": 228.56, "end": 234.16, "text": " The motivation for this work was mainly to enhance digital storytelling by removing filler"}, {"start": 234.16, "end": 240.35999999999999, "text": " words, potentially flabbed phrases, or retiming sentences in talking head videos."}, {"start": 240.35999999999999, "end": 245.32, "text": " There is so much more to it, so make sure to pause the video and read their full statement."}, {"start": 245.32, "end": 273.32, "text": " Thanks for watching and for your general support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=9M18rc9-VWU
This Jello Simulation Uses Only ~88 Lines of Code
📝 The paper "Moving Least Squares MPM with Compatible Particle-in-Cell" and its source code is available here: http://taichi.graphics/wp-content/uploads/2019/03/mls-mpm-cpic.pdf https://github.com/yuanming-hu/taichi_mpm The Taichi framework: http://taichi.graphics/ ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers ₿ Crypto and PayPal links are available below. Thank you very much for your generous support! › PayPal: https://www.paypal.me/TwoMinutePapers › Bitcoin: 1a5ttKiVQiDcr9j8JT2DoHGzLG7XTJccX › Ethereum: 0xbBD767C0e14be1886c6610bf3F592A91D866d380 › LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Ivelin Ivanov, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Today we are going to talk about the material point method. This method uses both grids and particles to simulate the movement of snow, dripping honey, interactions of granular solids, and a lot of other really cool phenomena on our computers. This can be used, for instance, in the movie industry, to simulate what a city would look like if it were flooded. However, it has its own limitations, which you will hear more about in a moment. This paper showcases really cool improvements to this technique. For instance, it enables us to run these simulations twice as fast and can simulate new phenomena that were previously not supported by the material point method. One is the simulation of complex, thin boundaries that enables us to cut things, so in this video, expect lots of virtual characters to get dismembered. I think this might be the only channel on YouTube where we can say this and celebrate it as an amazing scientific discovery. And the other key improvement of this paper is introducing two-way coupling, which means, in the example that you see here, that the water changes the movement of the wheel, but the wheel also changes the movement of the water. It is also demonstrated quite aptly here by this elastoplastic jello scene, in which we can throw in a bunch of blocks of different densities, and it is simulated beautifully how they sink deeper and deeper into the jello as a result. Here you see a real robot running around in a granular medium. And here we have a simulation of the same phenomenon, and we can marvel at how close the result is to what would happen in real life. Another selling point of this method is that it is easy to implement, which is demonstrated here: what you see is the essence of this algorithm implemented in 88 lines of code. Wow! Now, these methods still take a while, as there are a lot of deformations and movements to compute, and we can only advance time in very small steps, and as a result, the speed of such simulations is measured not in frames per second but in seconds per frame. These are the kinds of simulations that we like to leave on the machine overnight. If you want to see something that is done with a remarkable amount of love and care, please read this paper. And I don't know if you have heard about this framework called Taichi. It contains implementations of many amazing papers in computer graphics. Lots of paper implementations on animation, light transport simulations, you name it, a total of more than 40 papers are implemented there. And I was thinking, this is really amazing. I wonder which group made this. Then I noticed it was written by one person, and that person is Yuanming Hu, the scientist who is the lead author of this paper. This is insanity. Thanks for watching and for your generous support, and I'll see you next time.
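For readers who want a feel for the grid-and-particles idea that the material point method builds on, here is a bare-bones 1D particle-in-cell sketch in Python. This is not the paper's 88-line MLS-MPM code, which also tracks stresses and deformation gradients; this sketch only transfers velocities to a grid, applies gravity there, and transfers them back, with all constants chosen arbitrarily.

```python
import numpy as np

# Minimal 1D particle-in-cell step: a teaching sketch of the grid/particle hybrid.
n_grid, dx, dt, gravity = 32, 1.0 / 32, 1e-3, -9.8
px = np.random.rand(64) * 0.5 + 0.25   # particle positions in [0.25, 0.75]
pv = np.zeros(64)                      # particle velocities

def step(px, pv):
    grid_v = np.zeros(n_grid)
    grid_m = np.zeros(n_grid)
    base = np.floor(px / dx).astype(int)
    frac = px / dx - base
    # Particle-to-grid: scatter mass and momentum with linear weights.
    np.add.at(grid_m, base, 1 - frac)
    np.add.at(grid_m, base + 1, frac)
    np.add.at(grid_v, base, (1 - frac) * pv)
    np.add.at(grid_v, base + 1, frac * pv)
    nonzero = grid_m > 0
    grid_v[nonzero] /= grid_m[nonzero]      # momentum -> velocity
    grid_v[nonzero] += dt * gravity         # grid forces (here: just gravity)
    grid_v[[0, -1]] = 0.0                   # sticky boundary at the domain ends
    # Grid-to-particle: gather velocities back and advect the particles.
    pv = (1 - frac) * grid_v[base] + frac * grid_v[base + 1]
    px = np.clip(px + dt * pv, 0.0, 1.0 - 1.001 * dx)
    return px, pv

for _ in range(100):
    px, pv = step(px, pv)
print("mean particle height after 100 steps:", px.mean())
```

Even in this stripped-down form, the appeal of the hybrid is visible: forces and boundary conditions are easy to handle on the grid, while the particles carry the material around freely.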
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Kato Joanaifahir."}, {"start": 4.4, "end": 8.040000000000001, "text": " Today we are going to talk about the material point method."}, {"start": 8.040000000000001, "end": 13.84, "text": " This method uses both grids and particles to simulate the movement of snow, dripping"}, {"start": 13.84, "end": 19.68, "text": " honey, interactions of granular solids, and a lot of other really cool phenomena on our"}, {"start": 19.68, "end": 20.8, "text": " computers."}, {"start": 20.8, "end": 24.92, "text": " This can be used, for instance, in the movie industry, to simulate what a city would"}, {"start": 24.92, "end": 26.72, "text": " look like if it were flooded."}, {"start": 26.72, "end": 31.799999999999997, "text": " However, it has its own limitations, which you will hear more about in a moment."}, {"start": 31.799999999999997, "end": 35.6, "text": " This paper showcases really cool improvements to this technique."}, {"start": 35.6, "end": 41.76, "text": " For instance, it enables to run these simulations twice as fast and can simulate new phenomena"}, {"start": 41.76, "end": 45.56, "text": " that were previously not supported by the material point method."}, {"start": 45.56, "end": 51.76, "text": " One is the simulation of complex, thin boundaries that enables us to cut things so in this video"}, {"start": 51.76, "end": 55.28, "text": " expect lots of virtual characters to get dismembered."}, {"start": 55.28, "end": 60.68, "text": " I think this might be the only channel on YouTube where we can say this and celebrate it as"}, {"start": 60.68, "end": 63.04, "text": " an amazing scientific discovery."}, {"start": 63.04, "end": 68.28, "text": " And the other key improvement of this paper is introducing two-way coupling, which means"}, {"start": 68.28, "end": 73.92, "text": " the example that you see here as the water changes the movement of the wheel, but the wheel"}, {"start": 73.92, "end": 76.88, "text": " also changes with the movement of the water."}, {"start": 76.88, "end": 82.32, "text": " It is also demonstrated quite aptly here by this elastoplastic jello scene in which we"}, {"start": 82.32, "end": 86.8, "text": " can throw in a bunch of blocks of different densities and it is simulated beautifully"}, {"start": 86.8, "end": 92.6, "text": " here how they sink into the jello deeper and deeper as a result."}, {"start": 92.6, "end": 100.03999999999999, "text": " Here you see a real robot running around in a granular medium."}, {"start": 100.03999999999999, "end": 105.88, "text": " And here we have a simulation of the same phenomenon and can marvel at how close the"}, {"start": 105.88, "end": 109.63999999999999, "text": " result is to what would happen in real life."}, {"start": 109.64, "end": 114.16, "text": " Another selling point of this method is that this is easy to implement, which is demonstrated"}, {"start": 114.16, "end": 120.92, "text": " here and what you see here is the essence of this algorithm implemented in 88 lines of"}, {"start": 120.92, "end": 122.24, "text": " code."}, {"start": 122.24, "end": 125.48, "text": " Wow!"}, {"start": 125.48, "end": 130.08, "text": " Now these methods still take a while as there is a lot of deformations and movements to"}, {"start": 130.08, "end": 136.52, "text": " compute and we can only advance time in very small steps and as a result the speed of such"}, {"start": 136.52, "end": 142.12, "text": " simulations is measured nothing frames per second but in 
seconds per frame."}, {"start": 142.12, "end": 146.44, "text": " These are the kinds of simulations that we like to leave on the machine overnight."}, {"start": 146.44, "end": 150.96, "text": " If you want to see something that is done with a remarkable amount of love and care,"}, {"start": 150.96, "end": 153.12, "text": " please read this paper."}, {"start": 153.12, "end": 156.96, "text": " And I don't know if you have heard about this framework called Tai Chi."}, {"start": 156.96, "end": 161.72, "text": " This contains implementations for many amazing papers in computer graphics."}, {"start": 161.72, "end": 166.84, "text": " Lots of paper implementations on animation, light transport simulations, you name it,"}, {"start": 166.84, "end": 170.72, "text": " a total of more than 40 papers are implemented there."}, {"start": 170.72, "end": 173.88, "text": " And I was thinking this is really amazing."}, {"start": 173.88, "end": 176.52, "text": " I wonder which group made this."}, {"start": 176.52, "end": 183.24, "text": " Then I noticed it was written by one person and that person is Yuan Ming Hu, the scientist"}, {"start": 183.24, "end": 185.84, "text": " who is the lead author of this paper."}, {"start": 185.84, "end": 187.84, "text": " This is insanity."}, {"start": 187.84, "end": 191.68, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=thQ7QjqNPlY
This AI Makes The Mona Lisa Come To Life
❤️ Check out Weights & Biases here and sign up for a free demo: https://www.wandb.com/papers 📝 The paper "Few-Shot Adversarial Learning of Realistic Neural Talking Head Models" is available here: https://arxiv.org/abs/1905.08233v1 https://www.youtube.com/watch?v=p1b5aiTrGzY 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Ivelin Ivanov, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Instagram: https://www.instagram.com/twominutepapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #DeepFake
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This work presents a learning-based method that is able to take just a handful of photos and use those to synthesize a moving virtual character. Not only that, but it can also synthesize these faces from new viewpoints that the AI hasn't seen before. These results are truly sublime, however, hold on to your papers, because it also works from as little as just one input image. This is referred to as one-shot learning. You see some examples here, but wait a second, really, just one image? If all it needs is just one photo, this means that we can use famous photographs and even paintings and synthesize animations for them. Look at that. Of course, if we show multiple photos to the AI, it is able to synthesize better output results. You see such a progression here as a function of the amount of input data. The painting part I find to be particularly cool because it strays away from the kind of data the neural networks were trained on, which is photos. However, if we have proper intelligence, the AI can learn how different parts of the human face move and generalize this knowledge to paintings as well. The underlying laws are the same, only the style of the output is different. Absolutely amazing. The paper also showcases an extensive comparison section against previous works, and as you see here, nothing really compares to this kind of quality. I have heard the quote, any sufficiently advanced technology is indistinguishable from magic, so many times in my life, and I was like, okay, well, maybe, but I'm telling you, this is one of those times when I really felt that I am seeing magic at work on my computer screen. So, I know what you're thinking. How can all this wizardry be done? This paper proposes a novel architecture where three neural networks work together. One, the embedder takes colored images with landmark information and compresses them down into the essence of these images. Two, the generator takes a set of landmarks, a crude approximation of the human face, and synthesizes a photorealistic result from it. And three, the discriminator looks at both real and fake images and tries to learn how to tell them apart. As a result, these networks learn together, and over time, they improve together. So much so that they can create these amazing animations from just one source photo. The authors also released a statement on the purpose and effects of this technology, which I'll leave here for a few seconds for our interested viewers. This work was partly done at the Samsung AI Lab and Skoltech. Congratulations to both institutions, killer paper. Make sure to check it out in the video description. This episode has been supported by Weights & Biases. Weights & Biases provides tools to track your experiments in your deep learning projects. It is like a shared logbook for your team, and with this, you can compare your own experiment results, put them next to what your colleagues did, and you can discuss your successes and failures much easier. It takes less than five minutes to set up and is being used by OpenAI, Toyota Research, Stanford, and Berkeley. It was also used in this OpenAI project that you see here, which we covered earlier in the series. They reported that experiment tracking was crucial in this project and that these tools saved them quite a bit of time and money. If only I had access to such a tool during our last research project, where I had to compare the performance of neural networks for months and months. Well, it turns out I will be able to get access to these tools, because get this, it's free and will always be free for academics and open source projects. Make sure to visit them through wandb.com or just click the link in the video description and sign up for a free demo today. Our thanks to Weights & Biases for helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
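To make the three-network setup above a little more tangible, here is a heavily simplified PyTorch sketch of how an embedder, a generator, and a discriminator could be wired together and trained adversarially. It is not the architecture from the paper: the real networks are deep convolutional models with far richer losses, while these are tiny stand-ins running on random tensors, with all shapes and hyperparameters chosen arbitrarily.

```python
import torch
import torch.nn as nn

class Embedder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(6, 16, 3, 2, 1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 32))
    def forward(self, frame, landmarks):              # source frame + landmarks -> identity embedding
        return self.net(torch.cat([frame, landmarks], dim=1))

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3 + 32, 16, 3, 1, 1), nn.ReLU(),
                                 nn.Conv2d(16, 3, 3, 1, 1), nn.Sigmoid())
    def forward(self, landmarks, emb):                 # landmarks + embedding -> synthesized frame
        emb_map = emb[:, :, None, None].expand(-1, -1, *landmarks.shape[2:])
        return self.net(torch.cat([landmarks, emb_map], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(6, 16, 3, 2, 1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
    def forward(self, frame, landmarks):               # real/fake score for a (frame, landmarks) pair
        return self.net(torch.cat([frame, landmarks], dim=1))

E, G, D = Embedder(), Generator(), Discriminator()
opt_g = torch.optim.Adam(list(E.parameters()) + list(G.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

# One toy training step on random "frames" and "landmark images" (batch of 4, 64x64).
src_frame, src_lmk = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)
tgt_frame, tgt_lmk = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)

emb = E(src_frame, src_lmk)        # identity from one (or a few) source frames
fake = G(tgt_lmk, emb)             # synthesize the target pose

# Discriminator step: real pairs scored as 1, fake pairs as 0.
d_loss = bce(D(tgt_frame, tgt_lmk), torch.ones(4, 1)) + \
         bce(D(fake.detach(), tgt_lmk), torch.zeros(4, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator/embedder step: fool the discriminator and match the target frame.
g_loss = bce(D(fake, tgt_lmk), torch.ones(4, 1)) + (fake - tgt_frame).abs().mean()
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(f"d_loss={d_loss.item():.3f}  g_loss={g_loss.item():.3f}")
```

The few-shot behavior comes from the embedder: at test time, one or a handful of frames of a new person are enough to produce an embedding, and the generator can then be driven by any sequence of landmarks.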
[{"start": 0.0, "end": 4.5600000000000005, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Karo Zonai Fahir."}, {"start": 4.5600000000000005, "end": 10.3, "text": " This work presents a learning-based method that is able to take just a handful of photos"}, {"start": 10.3, "end": 14.9, "text": " and use those to synthesize a moving virtual character."}, {"start": 14.9, "end": 20.28, "text": " Not only that, but it can also synthesize these faces from new viewpoints that the AI"}, {"start": 20.28, "end": 22.080000000000002, "text": " hasn't seen before."}, {"start": 22.080000000000002, "end": 27.6, "text": " These results are truly sublime, however, hold on to your papers because it also works"}, {"start": 27.6, "end": 31.6, "text": " from as little as just one input image."}, {"start": 31.6, "end": 34.0, "text": " This will refer to as one shot learning."}, {"start": 34.0, "end": 40.36, "text": " You see some examples here, but wait a second, really, just one image?"}, {"start": 40.36, "end": 46.040000000000006, "text": " If all it needs is just one photo, this means that we can use famous photographs and even"}, {"start": 46.040000000000006, "end": 49.8, "text": " paintings and synthesize animations for them."}, {"start": 49.8, "end": 51.64, "text": " Look at that."}, {"start": 51.64, "end": 56.8, "text": " Of course, if we show multiple photos to the AI, it is able to synthesize better output"}, {"start": 56.8, "end": 57.8, "text": " results."}, {"start": 57.8, "end": 62.76, "text": " You see such a progression here as a function of the amount of input data."}, {"start": 62.76, "end": 67.44, "text": " The painting part I find to be particularly cool because it straights away from the"}, {"start": 67.44, "end": 71.75999999999999, "text": " kind of data the neural networks were trained on, which is photos."}, {"start": 71.75999999999999, "end": 77.28, "text": " However, if we have proper intelligence, the AI can learn how different parts of the"}, {"start": 77.28, "end": 82.2, "text": " human face move and generalize this knowledge to paintings as well."}, {"start": 82.2, "end": 87.72, "text": " The underlying laws are the same, only the style of the output is different."}, {"start": 87.72, "end": 88.88, "text": " Absolutely amazing."}, {"start": 88.88, "end": 95.84, "text": " The paper also showcases an extensive comparison section to previous works, and as you see here,"}, {"start": 95.84, "end": 98.80000000000001, "text": " nothing really compares to this kind of quality."}, {"start": 98.80000000000001, "end": 104.32000000000001, "text": " I have heard the quote, any sufficiently advanced technology is indistinguishable from magic"}, {"start": 104.32000000000001, "end": 111.2, "text": " so many times in my life, and I was like, okay, well, maybe, but I'm telling you, this"}, {"start": 111.2, "end": 116.28, "text": " is one of those times when I really felt that I am seeing magic at work on my computer"}, {"start": 116.28, "end": 117.28, "text": " screen."}, {"start": 117.28, "end": 119.72, "text": " So, I know what you're thinking."}, {"start": 119.72, "end": 122.52000000000001, "text": " How can all this wizardry be done?"}, {"start": 122.52000000000001, "end": 127.96000000000001, "text": " This paper proposes a novel architecture where three neural networks work together."}, {"start": 127.96000000000001, "end": 134.08, "text": " One, the M-better takes colored images with landmark information and compresses it down"}, {"start": 134.08, "end": 136.44, "text": " into the 
essence of these images."}, {"start": 136.44, "end": 142.96, "text": " Two, the generator takes a set of landmarks, a crude approximation of the human face, and"}, {"start": 142.96, "end": 146.35999999999999, "text": " synthesizes a photorealistic result from it."}, {"start": 146.35999999999999, "end": 152.28, "text": " And three, the discriminator looks at both real and fake images and tries to learn how"}, {"start": 152.28, "end": 154.04, "text": " to tell them apart."}, {"start": 154.04, "end": 159.96, "text": " As a result, these networks learn together and over time, they improve together."}, {"start": 159.96, "end": 165.92, "text": " So much so that they can create these amazing animations from just one source photo."}, {"start": 165.92, "end": 171.0, "text": " The authors also released a statement on the purpose and effects of this technology, which"}, {"start": 171.0, "end": 174.79999999999998, "text": " I'll leave here for a few seconds for our interested viewers."}, {"start": 174.79999999999998, "end": 179.44, "text": " This work was partly done at the Samsung AI Lab and Skoltec."}, {"start": 179.44, "end": 183.2, "text": " Congratulations to both institutions, killer paper."}, {"start": 183.2, "end": 185.83999999999997, "text": " Make sure to check it out in the video description."}, {"start": 185.83999999999997, "end": 189.39999999999998, "text": " This episode has been supported by weights and biases."}, {"start": 189.39999999999998, "end": 194.6, "text": " Weight and biases provides tools to track your experiments in your deep learning projects."}, {"start": 194.6, "end": 199.84, "text": " It is like a shared logbook for your team, and with this, you can compare your own experiment"}, {"start": 199.84, "end": 204.84, "text": " results, put them next to what your colleagues did, and you can discuss your successes and"}, {"start": 204.84, "end": 206.95999999999998, "text": " failures much easier."}, {"start": 206.95999999999998, "end": 213.24, "text": " It takes less than five minutes to set up and is being used by OpenAI, Toyota Research,"}, {"start": 213.24, "end": 215.12, "text": " Stanford, and Berkeley."}, {"start": 215.12, "end": 219.51999999999998, "text": " It was also used in this OpenAI project that you see here, which we covered earlier"}, {"start": 219.51999999999998, "end": 220.68, "text": " in the series."}, {"start": 220.68, "end": 225.84, "text": " We reported that experiment tracking was crucial in this project and that these tools saved"}, {"start": 225.84, "end": 228.44, "text": " them quite a bit of time and money."}, {"start": 228.44, "end": 233.6, "text": " If only I had access to such a tool during our last research project where I had to compare"}, {"start": 233.6, "end": 236.92000000000002, "text": " the performance of neural networks for months and months."}, {"start": 236.92000000000002, "end": 243.04000000000002, "text": " Well, it turns out I will be able to get access to these tools because get this, it's free"}, {"start": 243.04000000000002, "end": 247.44, "text": " and will always be free for academics and open source projects."}, {"start": 247.44, "end": 254.84, "text": " Make sure to visit them through WNDB.com or just click the link in the video description"}, {"start": 254.84, "end": 257.24, "text": " and sign up for a free demo today."}, {"start": 257.24, "end": 261.28, "text": " Our thanks to weights and biases for helping us make better videos for you."}, {"start": 261.28, "end": 288.03999999999996, "text": " Thanks for watching and for 
your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=S7HlxaMmWAU
Artistic Style Transfer, Now in 3D!
📝 The paper "Fast Example-Based Stylization with Local Guidance" is available here: https://dcgi.fel.cvut.cz/home/sykorad/styleblit.html ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Ivelin Ivanov, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #StyleTransfer
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Style transfer is an interesting problem in machine learning research where we have two input images, one for content and one for style, and the output is our content image reimagined with this new style. The cool part is that the content can be a photo straight from our camera and the style can be a painting, which leads to super fun and really good-looking results. We have seen plenty of papers doing variations on style transfer, but can we push this concept further? And the answer is yes. For instance, few people know that style transfer can also be done in 3D. If you look here, you see an artist performing this style transfer by drawing on a simple sphere and getting their artistic style to carry over to a complicated piece of 3D geometry. We talked about this technique in Two Minute Papers episode 94, and for your reference, we are currently over episode 340. Leave a comment if you've been around back then. This previous technique led to truly amazing results, but still had two weak points. One, it took too long. As you see here, this method took around a minute or more to produce these results. And hold on to your papers, because this new paper is approximately a thousand times faster than that, which means that it can produce a hundred frames per second at a whopping 4K resolution. But of course, none of this matters if the visual quality is not similar. And if you look closely, you see that the new results are indeed really close to the reference results of the older method. So, what was the other problem? The other problem was the lack of temporal coherence. This means that when creating an animation, it seems like each of the individual frames of the animation were drawn separately by an artist. In this new work, this is not only eliminated, as you see here, but the new technique even gives us the opportunity to control the amount of flickering. With these improvements, this is now a proper tool to help artists perform this 3D style transfer and create these rich virtual worlds much quicker and easier in the future. It also opens up the possibility for novices to do that, which is an amazing value proposition. Limitations still apply: for instance, if we have a texture with some regularity, such as this brick wall pattern here, the alignment and continuity of the bricks on the 3D model may suffer. This can be fixed, but it is a little labor-intensive. However, you know what I'm saying, two more papers down the line and this will likely cease to be an issue. And what you've seen here today is just one paper down the line from the original work, and we can do 4K resolution at 100 frames per second. Unreal! Thanks for watching and for your generous support, and I'll see you next time.
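As a toy illustration of the "paint a sphere, transfer the style to 3D geometry" idea mentioned above, here is a small numpy sketch that copies colors from a painted sphere exemplar to a target shape by matching surface normals per pixel. The actual paper uses patch-based local guidance and is far more sophisticated and temporally coherent; this brute-force nearest-normal lookup, run on randomly generated data, is only meant to show the core lookup.

```python
import numpy as np

def sphere_normals(size):
    """Unit normals of a sphere rendered into a size x size image (NaN outside the sphere)."""
    y, x = np.mgrid[-1:1:size * 1j, -1:1:size * 1j]
    r2 = x ** 2 + y ** 2
    z = np.sqrt(np.clip(1.0 - r2, 0.0, None))
    normals = np.stack([x, y, z], axis=-1)
    normals[r2 > 1.0] = np.nan
    return normals

def stylize(target_normals, exemplar_image):
    """For every target pixel, copy the exemplar pixel with the closest surface normal."""
    size = exemplar_image.shape[0]
    ex_normals = sphere_normals(size).reshape(-1, 3)
    valid = ~np.isnan(ex_normals[:, 0])
    ex_normals, ex_colors = ex_normals[valid], exemplar_image.reshape(-1, 3)[valid]
    h, w, _ = target_normals.shape
    flat = target_normals.reshape(-1, 3)
    # Nearest-normal lookup via the largest dot product (brute force; fine for small images).
    idx = np.argmax(flat @ ex_normals.T, axis=1)
    return ex_colors[idx].reshape(h, w, 3)

# Tiny demo: a random "painted sphere" exemplar and a flat target facing the camera.
exemplar = np.random.rand(64, 64, 3)
target = np.zeros((32, 32, 3)); target[..., 2] = 1.0   # all normals point at the viewer
print(stylize(target, exemplar).shape)                  # (32, 32, 3)
```

Lookups of this kind are embarrassingly parallel, since every output pixel can be computed independently, which hints at how the new method reaches 100 frames per second at 4K.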
[{"start": 0.0, "end": 4.0, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifahir."}, {"start": 4.0, "end": 8.0, "text": " Style Transfer is an interesting problem in machine learning research"}, {"start": 8.0, "end": 13.0, "text": " where we have two input images, one for content and one for style,"}, {"start": 13.0, "end": 17.0, "text": " and the output is our content image reimagined with this new style."}, {"start": 17.0, "end": 22.0, "text": " The cool part is that the content can be a photo straight from our camera"}, {"start": 22.0, "end": 28.0, "text": " and the style can be a painting which leads to super fun and really good looking results."}, {"start": 28.0, "end": 32.0, "text": " We have seen plenty of papers doing variations on style transfer"}, {"start": 32.0, "end": 36.0, "text": " but can we push this concept further? And the answer is yes."}, {"start": 36.0, "end": 42.0, "text": " For instance, few people know that style transfer can also be done in 3D."}, {"start": 42.0, "end": 46.0, "text": " If you look here, you see an artist performing this style transfer"}, {"start": 46.0, "end": 51.0, "text": " by drawing on a simple sphere and get their artistic style to carry over"}, {"start": 51.0, "end": 54.0, "text": " to a complicated piece of 3D geometry."}, {"start": 54.0, "end": 58.0, "text": " We talked about this technique in 2-minute papers episode 94"}, {"start": 58.0, "end": 62.0, "text": " and for your reference, we are currently at over episode 340."}, {"start": 62.0, "end": 64.0, "text": " Leave a comment if you've been around back then."}, {"start": 64.0, "end": 68.0, "text": " And this previous technique led to truly amazing results"}, {"start": 68.0, "end": 70.0, "text": " but still had two weak points."}, {"start": 70.0, "end": 72.0, "text": " One, it took too long."}, {"start": 72.0, "end": 78.0, "text": " As you see here, this method took around a minute or more to produce these results."}, {"start": 78.0, "end": 82.0, "text": " And hold on to your papers because this new paper is approximately"}, {"start": 82.0, "end": 88.0, "text": " a thousand times faster than that, which means that it can produce a hundred frames per second"}, {"start": 88.0, "end": 91.0, "text": " using a whopping 4K resolution."}, {"start": 91.0, "end": 96.0, "text": " But of course, none of this matters if the visual quality is not similar."}, {"start": 96.0, "end": 101.0, "text": " And if you look closely, you see that the new results are indeed really close"}, {"start": 101.0, "end": 104.0, "text": " to the reference results of the older method."}, {"start": 104.0, "end": 106.0, "text": " So, what was the other problem?"}, {"start": 106.0, "end": 109.0, "text": " The other problem was the lack of temporal coherence."}, {"start": 109.0, "end": 115.0, "text": " This means that when creating an animation, it seems like each of the individual frames of the animation"}, {"start": 115.0, "end": 118.0, "text": " were drawn separately by an artist."}, {"start": 118.0, "end": 122.0, "text": " In this new work, this is not only eliminated, as you see here,"}, {"start": 122.0, "end": 128.0, "text": " but the new technique even gives us the opportunity to control the amount of flickering."}, {"start": 128.0, "end": 134.0, "text": " With these improvements, this is now a proper tool to help artists perform this 3D style transfer"}, {"start": 134.0, "end": 140.0, "text": " and create these rich virtual worlds much quicker and easier in the future."}, {"start": 140.0, 
"end": 146.0, "text": " It also opens up the possibility for novices to do that, which is an amazing value proposition."}, {"start": 146.0, "end": 151.0, "text": " Limitation still applies, for instance, if we have a texture with some regularity,"}, {"start": 151.0, "end": 158.0, "text": " such as this brick wall pattern here, the alignment and continuity of the bricks on the 3D model may suffer."}, {"start": 158.0, "end": 161.0, "text": " This can be fixed, but it is a little labor intensive."}, {"start": 161.0, "end": 165.0, "text": " However, you know what I'm saying, two more papers down the line"}, {"start": 165.0, "end": 167.0, "text": " and this will likely cease to be an issue."}, {"start": 167.0, "end": 172.0, "text": " And what you've seen here today is just one paper down the line from the original work"}, {"start": 172.0, "end": 176.0, "text": " and we can do 4K resolution at 100 frames per second."}, {"start": 176.0, "end": 177.0, "text": " Unreal!"}, {"start": 177.0, "end": 192.0, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=pQA8Wzt8wdw
OpenAI's MuseNet Learned to Compose Mozart, Bon Jovi and More
📝 The blog post on OpenAI MuseNet is available here: https://openai.com/blog/musenet/ ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Ivelin Ivanov, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #OpenAI #MuseNet
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Not so long ago, OpenAI released GPT-2, an AI that was trained to look at a piece of text and perform common natural language processing operations on it, for instance, answering questions, summarization, and more. But today, we are going to be laser-focused on only one of those tasks, and that task is continuation, where we give an AI a bunch of text, and we ask it to continue it. However, as these learning algorithms are quite general by design, here comes the twist: who said that this can only work for text? Why not try it on composing music? So, let's have a look at some results where only the first six notes were given from a song, and the AI was asked to continue it. Love it! This is a great testament to the power of general learning algorithms. As you've heard, this works great for a variety of different genres as well, and not only that, but it can also create really cool blends between genres. Listen as the AI starts out from the first six notes of a Chopin piece and transitions into a pop style with a bunch of different instruments entering a few seconds in. And great news: if you look here, we can try our own combinations through an online demo as well. On the left side, we can specify and hear the short input sample and ask for a variety of different styles for the continuation. It is amazing fun, try it, I've put a link in the video description. I was particularly impressed with this combination. Now, this algorithm is also not without limitations, as it has difficulties pairing instruments that either don't go too well together or for which there is too little training data on how they should sound together. The source code is also either already available as of the publishing of this video or will be available soon. If so, I will come back and update the video description with the link. OpenAI has also published an almost two-hour concert with tons of different genres, so make sure to head to the video description and check it out yourself. I think these techniques are either already so powerful or will soon be powerful enough to raise important copyright questions, and we will need plenty of discussions on who really owns this piece of music. What do you think? Let me know in the comments. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.64, "end": 10.72, "text": " Not so long ago, OpenAI has released GPT2, an AI that was trained to look at a piece"}, {"start": 10.72, "end": 16.6, "text": " of text and perform common natural language processing operations on it, for instance,"}, {"start": 16.6, "end": 20.56, "text": " answering questions, summarization, and more."}, {"start": 20.56, "end": 26.2, "text": " But today, we are going to be laser-focused on only one of those tasks, and that task"}, {"start": 26.2, "end": 32.0, "text": " is continuation, where we give an AI a bunch of text, and we ask it to continue it."}, {"start": 32.0, "end": 37.96, "text": " However, as these learning algorithms are quite general by design, here comes the twist,"}, {"start": 37.96, "end": 41.24, "text": " who said that this can only work for text."}, {"start": 41.24, "end": 43.68, "text": " Why not try it on composing music?"}, {"start": 43.68, "end": 48.879999999999995, "text": " So, let's have a look at some results where only the first six notes were given from"}, {"start": 48.88, "end": 61.84, "text": " a song, and the AI was asked to continue it."}, {"start": 78.88, "end": 95.72, "text": " Love it!"}, {"start": 95.72, "end": 100.28, "text": " This is a great testament to the power of general learning algorithms."}, {"start": 100.28, "end": 105.16, "text": " As you've heard, this works great for a variety of different genres as well, and not only"}, {"start": 105.16, "end": 109.56, "text": " that, but it can also create really cool blends between genres."}, {"start": 109.56, "end": 115.47999999999999, "text": " Listen as the AI starts out from the first six notes of a Chopin piece and transitions"}, {"start": 115.48, "end": 136.16, "text": " into a pop style with a bunch of different instruments entering a few seconds in."}, {"start": 136.16, "end": 141.04000000000002, "text": " And a great news, because if you look here, we can try our own combinations through an"}, {"start": 141.04000000000002, "end": 142.8, "text": " online demo as well."}, {"start": 142.8, "end": 148.28, "text": " On the left side, we can specify and hear the short input sample and ask for a variety"}, {"start": 148.28, "end": 151.08, "text": " of different styles for the continuation."}, {"start": 151.08, "end": 155.84, "text": " It is amazing fun, try it, I've put a link in the video description."}, {"start": 155.84, "end": 172.76000000000002, "text": " I was particularly impressed with this combination."}, {"start": 172.76, "end": 180.32, "text": " Now, this algorithm is also not without limitations as it has difficulties pairing instruments"}, {"start": 180.32, "end": 185.64, "text": " that either don't go too well together or there is lacking training data on how they"}, {"start": 185.64, "end": 187.39999999999998, "text": " should sound together."}, {"start": 187.39999999999998, "end": 193.0, "text": " The source code is also either already available as of the publishing of this video or will"}, {"start": 193.0, "end": 194.39999999999998, "text": " be available soon."}, {"start": 194.39999999999998, "end": 198.64, "text": " If so, I will come back and update the video description with the link."}, {"start": 198.64, "end": 204.16, "text": " The AI has also published an almost two hour concert with tons of different genres, so"}, {"start": 204.16, "end": 207.55999999999997, "text": " make sure to head to the video description and check it 
out yourself."}, {"start": 207.55999999999997, "end": 212.72, "text": " I think these techniques are either already so powerful or will soon be powerful enough"}, {"start": 212.72, "end": 217.64, "text": " to raise important copyright questions and will need plenty of discussions on who really"}, {"start": 217.64, "end": 220.04, "text": " owns this piece of music."}, {"start": 220.04, "end": 221.04, "text": " What do you think?"}, {"start": 221.04, "end": 222.04, "text": " Let me know in the comments."}, {"start": 222.04, "end": 229.16, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Y73iUAh56iI
We Can All Be Video Game Characters With This AI
📝 The paper "Vid2Game: Controllable Characters Extracted from Real-World Videos" is available here: https://arxiv.org/abs/1904.08379 ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Ivelin Ivanov, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #Vid2Game
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. The title of this paper is very descriptive: it says, controllable characters extracted from real-world videos. This sounds a little like science fiction, so let's pick this apart. If we forget about the controllable part, we get something that you've seen in this series many times: pose estimation. Pose estimation means that we have a human character in an image or a video, and we have a computer program look at it and tell us the current position this character is taking. This is useful for medical applications such as detecting issues with motor functionality, fall detection, or we can also use it for motion capture for our video games and blockbuster movies. So just performing the pose estimation part is a great invention, but relatively old news. So what's really new here? Why is this work interesting? How does it go beyond pose estimation? Well, as a hint, the title contains an additional word: controllable. So look at this. Woohoo! As you see, this technique is not only able to identify where a character is, but we can grab a controller and move it around. This means making this character perform novel actions and showing it from novel views. It's really remarkable because this requires a proper understanding of the video we are watching. And this means that we can not only watch these real-world videos, as you see this small piece of footage used for the learning, but by performing these actions with the controller, we can make a video game out of it. Especially given that here, the background has also been changed. To achieve this, this work contains two key elements. Element number one is the pose-to-pose network that takes an input posture and the button we pushed on the controller and creates the next step of the animation. And then, element number two, the pose-to-frame architecture blends this new animation step into an already existing image. The neural network that performs this is trained in a way where it is encouraged to create these character masks in a way that is continuous and doesn't contain jarring jumps between the individual frames, leading to smooth and believable movements. Now clearly, anyone who takes a cursory look sees that the animations are not perfect and still contain artifacts, but just imagine that this paper is among the first introductory works on this problem. Just imagine what we will have two more papers down the line. I can't wait. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Karo Zonai-Fehir."}, {"start": 4.6000000000000005, "end": 10.72, "text": " The title of this paper is very descriptive, it says, controllable characters extracted from"}, {"start": 10.72, "end": 12.32, "text": " real-world videos."}, {"start": 12.32, "end": 16.72, "text": " This sounds a little like science fiction, so let's pick this apart."}, {"start": 16.72, "end": 20.72, "text": " If we forget about the controllable part, we get something that you've seen in this"}, {"start": 20.72, "end": 24.28, "text": " series many times, pose estimation."}, {"start": 24.28, "end": 29.400000000000002, "text": " Pose estimation means that we have a human character in an image or a video, have a computer"}, {"start": 29.4, "end": 34.64, "text": " program look at it and tell us the current position this character is taking."}, {"start": 34.64, "end": 40.519999999999996, "text": " This is useful for medical applications such as detecting issues with motor functionality,"}, {"start": 40.519999999999996, "end": 46.28, "text": " fall detection, or we can also use it for motion capture for our video games and blockbuster"}, {"start": 46.28, "end": 47.28, "text": " movies."}, {"start": 47.28, "end": 53.76, "text": " So just performing the pose estimation part is a great invention, but relatively old news."}, {"start": 53.76, "end": 55.96, "text": " So what's really new here?"}, {"start": 55.96, "end": 57.879999999999995, "text": " Why is this work interesting?"}, {"start": 57.88, "end": 60.28, "text": " How does it go beyond pose estimation?"}, {"start": 60.28, "end": 65.84, "text": " Well, as a hint, the title contains an additional word controllable."}, {"start": 65.84, "end": 68.48, "text": " So look at this."}, {"start": 68.48, "end": 69.48, "text": " Woohoo!"}, {"start": 69.48, "end": 75.08, "text": " As you see, this technique is not only able to identify where a character is, but we can"}, {"start": 75.08, "end": 78.2, "text": " grab a controller and move it around."}, {"start": 78.2, "end": 83.80000000000001, "text": " This means making this character perform novel actions and showing it from novel views."}, {"start": 83.8, "end": 88.28, "text": " It's really remarkable because this requires a proper understanding of the video we are"}, {"start": 88.28, "end": 89.36, "text": " watching."}, {"start": 89.36, "end": 93.72, "text": " And this means that we can not only watch these real world videos, as you see this small"}, {"start": 93.72, "end": 99.12, "text": " piece of footage used for the learning, but by performing these actions with the controller,"}, {"start": 99.12, "end": 109.03999999999999, "text": " we can make a video game out of it."}, {"start": 109.03999999999999, "end": 112.96, "text": " Especially given that here, the background has also been changed."}, {"start": 112.96, "end": 117.0, "text": " To achieve this, this work contains two key elements."}, {"start": 117.0, "end": 121.69999999999999, "text": " Element number one is the pose-to-pose network that takes an input posture and the button"}, {"start": 121.69999999999999, "end": 126.55999999999999, "text": " we pushed on the controller and creates the next step of the animation."}, {"start": 126.55999999999999, "end": 132.44, "text": " And then, element number two, the post-of-frame architecture blends this new animation step"}, {"start": 132.44, "end": 134.79999999999998, "text": " into an already existing image."}, {"start": 
134.79999999999998, "end": 139.56, "text": " The neural network that performs this is trained in a way where it is encouraged to create"}, {"start": 139.56, "end": 145.0, "text": " these character masks in a way that is continuous and doesn't contain jarring jumps between"}, {"start": 145.0, "end": 149.6, "text": " the individual frames leading to smooth and believable movements."}, {"start": 149.6, "end": 155.36, "text": " Now clearly, anyone who takes a cursory look sees that the animations are not perfect and"}, {"start": 155.36, "end": 160.72, "text": " still contain artifacts, but just imagine that this paper is among the first introductory"}, {"start": 160.72, "end": 162.72, "text": " works on this problem."}, {"start": 162.72, "end": 166.04, "text": " Just imagine what will have two more papers down the line."}, {"start": 166.04, "end": 167.04, "text": " I can't wait."}, {"start": 167.04, "end": 170.88, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=38ZXwJj6j8k
All Hail The Mighty Translatotron!
❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers My talk and the full panel discussion at the NATO conference (I start at around the 12:30 minute mark): ▶️ https://www.facebook.com/StratComCOE/videos/698737203889068/ 📝 The paper "Direct speech-to-speech translation with a sequence-to-sequence model" and the voice samples are available here: https://arxiv.org/abs/1904.06037 https://google-research.github.io/lingvo-lab/translatotron/#conversational 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Ivelin Ivanov, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #Translatotron
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Scientists at Google just released Translatotron. This is an AI that is able to translate speech from one language into speech in another language, and here comes the first twist: without using text as an intermediate representation. You give it the sound waves and you get the translated sound waves. And this neural network was trained on approximately a million voice samples. So, let's see what learning on this one million samples gives us. Listen, this is the input sentence in Spanish. And here it is translated to English, but using the voice of the same person. And you will not know unless you ask. Stop doing that. New levels and pliados date, quenfield university. New hires at quenfield university. This is incredible. However, there is another twist, perhaps an even bigger one, believe it or not. This technique can not only translate, but also perform voice transfer, so it can say the same thing using someone else's voice. This means that the AI not only has to learn what to say, but how to say it. This is immensely difficult. It's also not easy to know what we need to listen to and when. So, let me walk you through it. This is a sentence in Spanish. This is the same sentence said by someone else, an actual person, and in English. Swimming with dolphins. And now, the same thing but synthesized by the algorithm using both of their voices. Swimming with dolphins. Swimming with dolphins. Let's listen to them side by side some more. Swimming with dolphins. Swimming with dolphins. This is so good. Let's have a look at some more examples. So look around the country and what do you see? So look around the country and what you see. So look around the country and what do you see? Wow. The method performs the learning by trying to map these mel spectrograms between multiple speakers. You can see example sentences here and their corresponding spectrograms, which are concise representations of someone's voice and intonation. And of course, it is difficult to mathematically formalize what makes a good translation and a good mimicking of someone's voice. So in these cases, we'll let people be the judge and have them listen to a few speech signals and ask them to guess which was the real person and which was the AI speaking. If you take a closer look at the paper, you will see that it smokes the competition. This is great progress on an immensely difficult task, as we have to perform proper translation and voice transfer at the same time. It's quite a challenge. Of course, failure cases still exist. Listen. Then she is the cosa. Then, yeah, that's the thing. Then, yeah, that's the thing. Then, yeah, that's the thing. Then, yeah, that's the thing. Just imagine that you are in a foreign country and all you need to do is use your phone to tell stories to people not only in their own language, but also using your own voice, even if you don't speak a word of their language. Beautiful. Even this video could perhaps be available in a variety of languages using my own voice within the next few years, although I wonder how these algorithms would pronounce my name. So far, that proved to be quite a challenge for humans and AIs alike. And for now, all hail the mighty Translatotron. In the meantime, I just got back from this year's NATO conference. It was an incredible honor to get an invitation to speak at such an event and of course, I was happy to attend as a service to the public.
The goal of the talk was to inform key political and military decision makers about recent developments in AI so they can make better decisions for us. And I was so nervous during the talk. My goodness. If you wish to watch it, I put a link to it in the video description and I may be able to upload a higher quality version of this video here in the future. Attending the conference introduced delays in our schedule, my apologies for that, and normally, we would have to worry about whether, because of this, we'll have enough income to improve our recording equipment. However, with your support on Patreon, this is not at all the case, so I want to send you a big thank you for all your amazing support. This was really all possible thanks to you. If you wish to support us, just go to patreon.com slash two minute papers or just click the link in the video description. Have fun with the video. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 4.28, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zorna Ifahir."}, {"start": 4.28, "end": 8.28, "text": " Scientists at Google just released the Translate of Tron."}, {"start": 8.28, "end": 15.280000000000001, "text": " This is an AI that is able to translate speech from one language into speech into another language,"}, {"start": 15.280000000000001, "end": 20.76, "text": " and here comes the first twist without using text as an intermediate representation."}, {"start": 20.76, "end": 24.8, "text": " You give it the sound waves and you get the translated sound waves."}, {"start": 24.8, "end": 30.0, "text": " And this neural network was trained approximately on a million voice samples."}, {"start": 30.0, "end": 34.32, "text": " So, let's see what learning on this one million samples gives us."}, {"start": 34.32, "end": 37.84, "text": " Listen, this is the input sentence in Spanish."}, {"start": 43.84, "end": 50.519999999999996, "text": " And here it is translated to English, but using the voice of the same person."}, {"start": 50.52, "end": 56.440000000000005, "text": " And you will not know unless you ask."}, {"start": 56.440000000000005, "end": 62.6, "text": " Stop doing that."}, {"start": 62.6, "end": 67.4, "text": " New levels and pliados date, quenfield university."}, {"start": 67.4, "end": 71.08, "text": " New hires at quenfield university."}, {"start": 71.08, "end": 72.88, "text": " This is incredible."}, {"start": 72.88, "end": 77.84, "text": " However, there is another twist, perhaps an even bigger one, believe it or not."}, {"start": 77.84, "end": 82.48, "text": " This technique can not only translate, but also perform voice transfer,"}, {"start": 82.48, "end": 86.12, "text": " so it can say the same thing using someone else's voice."}, {"start": 86.12, "end": 91.24000000000001, "text": " This means that the AI not only has to learn what to say, but how to say it."}, {"start": 91.24000000000001, "end": 93.76, "text": " This is immensely difficult."}, {"start": 93.76, "end": 97.28, "text": " It's also not easy to know what we need to listen to and when."}, {"start": 97.28, "end": 99.36, "text": " So, let me walk you through it."}, {"start": 99.36, "end": 105.60000000000001, "text": " This is a sentence in Spanish."}, {"start": 105.6, "end": 112.6, "text": " This is the same sentence said by someone else, an actual person and in English."}, {"start": 112.6, "end": 114.88, "text": " Swimming with dolphins."}, {"start": 114.88, "end": 121.52, "text": " And now, the same thing but synthesized by the algorithm using both of their voices."}, {"start": 121.52, "end": 124.39999999999999, "text": " Swimming with dolphins."}, {"start": 124.39999999999999, "end": 126.75999999999999, "text": " Swimming with dolphins."}, {"start": 126.75999999999999, "end": 130.24, "text": " Let's listen to them side by side some more."}, {"start": 130.24, "end": 132.79999999999998, "text": " Swimming with dolphins."}, {"start": 132.8, "end": 135.84, "text": " Swimming with dolphins."}, {"start": 135.84, "end": 137.84, "text": " This is so good."}, {"start": 137.84, "end": 142.52, "text": " Let's have a look at some more examples."}, {"start": 142.52, "end": 152.48000000000002, "text": " So look around the country and what do you see?"}, {"start": 152.48000000000002, "end": 158.60000000000002, "text": " So look around the country and what you see."}, {"start": 158.6, "end": 165.32, "text": " So look around the country and what do you see?"}, {"start": 165.32, "end": 
167.07999999999998, "text": " Wow."}, {"start": 167.07999999999998, "end": 172.48, "text": " The method performs the learning by trying to map these male spectrograms between multiple"}, {"start": 172.48, "end": 173.56, "text": " speakers."}, {"start": 173.56, "end": 178.56, "text": " You can see example sentences here and their corresponding spectrograms which are concise"}, {"start": 178.56, "end": 182.68, "text": " representations of someone's voice and intonation."}, {"start": 182.68, "end": 187.92, "text": " And of course, it is difficult to mathematically formalize what makes a good translation"}, {"start": 187.92, "end": 190.51999999999998, "text": " and a good mimicking of someone's voice."}, {"start": 190.51999999999998, "end": 195.88, "text": " So in these cases, we'll let people be the judge and have them listen to a few speech signals"}, {"start": 195.88, "end": 201.88, "text": " and asking them to guess which was the real person and which was the AI speaking."}, {"start": 201.88, "end": 206.79999999999998, "text": " If you take a closer look at the paper, you will see that it smokes the competition."}, {"start": 206.79999999999998, "end": 212.35999999999999, "text": " This is great progress on an immensely difficult task as we have to perform proper translation"}, {"start": 212.35999999999999, "end": 215.72, "text": " and voice transfer at the same time."}, {"start": 215.72, "end": 217.27999999999997, "text": " It's quite a challenge."}, {"start": 217.28, "end": 220.4, "text": " Of course, failure cases still exist."}, {"start": 220.4, "end": 221.4, "text": " Listen."}, {"start": 221.4, "end": 224.4, "text": " Then she is the cosa."}, {"start": 224.4, "end": 228.4, "text": " Then, yeah, that's the thing."}, {"start": 228.4, "end": 236.08, "text": " Then, yeah, that's the thing."}, {"start": 236.08, "end": 240.48, "text": " Then, yeah, that's the thing."}, {"start": 240.48, "end": 247.08, "text": " Then, yeah, that's the thing."}, {"start": 247.08, "end": 256.68, "text": " Just imagine that you are in a foreign country and all you need to do is use your phone to"}, {"start": 256.68, "end": 263.08000000000004, "text": " test stories to people not only in their own language, but also using your own voice,"}, {"start": 263.08000000000004, "end": 266.48, "text": " even if you don't speak a word of their language."}, {"start": 266.48, "end": 267.8, "text": " Beautiful."}, {"start": 267.8, "end": 273.16, "text": " Even this video could perhaps be available in a variety of languages using my own voice"}, {"start": 273.16, "end": 279.16, "text": " within the next few years, although I wonder how these algorithms would pronounce my name."}, {"start": 279.16, "end": 283.76000000000005, "text": " So far, that proved to be quite a challenge for humans and AI's alike."}, {"start": 283.76000000000005, "end": 287.36, "text": " And for now, all hail the mighty translator, Tron."}, {"start": 287.36, "end": 291.36, "text": " In the meantime, I just got back from this year's NATO conference."}, {"start": 291.36, "end": 296.48, "text": " It was an incredible honor to get an invitation to speak at such an event and of course, I was"}, {"start": 296.48, "end": 299.08000000000004, "text": " happy to attend as a service to the public."}, {"start": 299.08, "end": 304.68, "text": " The goal of the talk was to inform key political and military decision makers about recent developments"}, {"start": 304.68, "end": 308.52, "text": " in AI so they can make better decisions for us."}, {"start": 308.52, "end": 
311.47999999999996, "text": " And I was so nervous during the talk."}, {"start": 311.47999999999996, "end": 312.68, "text": " My goodness."}, {"start": 312.68, "end": 317.36, "text": " If you wish to watch it, I put a link to it in the video description and I may be able"}, {"start": 317.36, "end": 321.91999999999996, "text": " to upload a higher quality version of this video here in the future."}, {"start": 321.91999999999996, "end": 326.64, "text": " Attending the conference introduced delays in our schedule, my apologies for that, and"}, {"start": 326.64, "end": 331.88, "text": " normally, we would have to worry whether because of this, we'll have enough income to improve"}, {"start": 331.88, "end": 333.56, "text": " our recording equipment."}, {"start": 333.56, "end": 338.59999999999997, "text": " However, with your support on Patreon, this is not at all the case, so I want to send you"}, {"start": 338.59999999999997, "end": 341.71999999999997, "text": " a big thank you for all your amazing support."}, {"start": 341.71999999999997, "end": 344.47999999999996, "text": " This was really all possible thanks to you."}, {"start": 344.47999999999996, "end": 350.08, "text": " If you wish to support us, just go to patreon.com slash two minute papers or just click the"}, {"start": 350.08, "end": 352.08, "text": " link in the video description."}, {"start": 352.08, "end": 353.24, "text": " Have fun with the video."}, {"start": 353.24, "end": 357.12, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
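The transcript above mentions that the method learns a mapping between mel spectrograms, which are compact time-frequency representations of speech. A minimal sketch of computing one with the librosa library is below; the file name and parameter values are arbitrary placeholders, not the settings used by Translatotron.

```python
import librosa
import numpy as np

# Load a few seconds of speech (placeholder file name), resampled to 16 kHz.
waveform, sample_rate = librosa.load("speech_sample.wav", sr=16000)

# Mel spectrogram: short-time Fourier transform followed by a mel-scaled filter bank.
mel = librosa.feature.melspectrogram(
    y=waveform, sr=sample_rate, n_fft=1024, hop_length=256, n_mels=80
)
log_mel = librosa.power_to_db(mel, ref=np.max)  # log scale, as speech models usually consume it

print(log_mel.shape)  # (n_mels, n_frames): the kind of grid a speech-to-speech model maps
```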
Two Minute Papers
https://www.youtube.com/watch?v=aJq6ygTWdao
This AI Makes Amazing DeepFakes…and More!
Check out Lambda Labs here: https://lambdalabs.com/papers 📝 The paper "Deferred Neural Rendering: Image Synthesis using Neural Textures" is available here: https://niessnerlab.org/projects/thies2019neural.html My earlier work on neural rendering in the first part of the video is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Ivelin Ivanov, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #Deepfake
This episode has been supported by Lambda Labs. Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In our earlier paper, Gaussian material synthesis, we made a neural renderer, and what this neural renderer was able to do is reproduce the results of a light transport simulation within 4 to 6 milliseconds in a way that is almost pixel-perfect. It took a fixed camera and scene, and we were able to come up with a ton of different materials, and it was always able to guess what the output would look like if we changed the physical properties of a material. This is a perfect setup for material synthesis, where these restrictions are not too limiting. Trying to perform high-quality neural rendering has been a really important research problem lately, and everyone is asking the question: can we do more with this? Can we move around with the camera and have a neural network predict what the scene would look like? Can we do this with animations? Well, have a look at this new paper, which is a collaboration between researchers at the Technical University of Munich and Stanford University, where all we need is some video footage of a person or object. It takes a close look at this kind of information and can offer three killer applications. One, it can synthesize the object from new viewpoints. Two, it can create a video of this scene and imagine what it would look like if we reorganized it, or can even add more objects to it. And three, perhaps everyone's favorite, performing facial reenactment from a source to a target actor. As with many other methods, these neural textures are stored on top of the 3D objects; however, a more detailed, high-dimensional description is also stored and learned by this algorithm, which enables it to have a deeper understanding of intricate light transport effects to create these new views. For instance, it is particularly good at reproducing specular highlights, which typically change rapidly as we change our viewpoint of the object. One of the main challenges was building a learning algorithm that can deal with this kind of complexity. The synthesis of mouth movements was always the Achilles' heel of these methods, so have a look at how well this one does with it. You can also see with the comparisons here that in general, this new technique smokes the competition. So how much training data do we need to achieve this? I would imagine that this would take hours and hours of video footage, right? No, not at all. This is what the results look like as a function of the amount of training data. On the left, you see that it already kind of works with 125 images, but contains artifacts; but if we can supply 1000 images, we're good. Note that 1000 images sounds like a lot, but it really isn't, it's just half a minute worth of video. How crazy is that? Some limitations still apply. You see one failure case here, and the neural network typically needs to be retrained if we wish to use it on new objects, but this work finally generalizes to multiple viewpoints, animation, scene editing, lots of different materials and geometries, and I can only imagine what we'll get two more papers down the line. Respect to the tools used for accomplishing this, and in general, make sure to have a look at Matthias Nießner's lab, who just got tenured as a full professor and he's only 32 years old. Congratulations. If you have AI-related ideas and you would like to try them, but not do it in the cloud because you wish to own your own hardware, look no further than Lambda Labs.
Lambda Labs offers sleek, beautifully designed laptops, workstations and servers that come pre-installed with every major learning framework and updates them for you, taking care of all the dependencies. Look at those beautiful and powerful machines. This way, you can spend more of your time with your ideas and don't have to deal with all the software maintenance work. Make sure to go to lambdalabs.com slash papers, or click their link in the video description and look around, and if you have any questions, you can even call them for advice. Big thanks to Lambda Labs for supporting this video and helping us make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 2.88, "text": " This episode has been supported by Lambda Labs."}, {"start": 2.88, "end": 6.88, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Jorna Ifahir."}, {"start": 6.88, "end": 12.08, "text": " In our earlier paper, Gaussian material synthesis, we made a neural renderer,"}, {"start": 12.08, "end": 18.400000000000002, "text": " and what this neural renderer was able to do is reproduce the results of a light transport simulation"}, {"start": 18.400000000000002, "end": 23.6, "text": " within 4 to 6 milliseconds in a way that is almost pixel-perfect."}, {"start": 23.6, "end": 29.6, "text": " It took a fixed camera and scene, and we were able to come up with a ton of different materials,"}, {"start": 29.6, "end": 35.52, "text": " and it was always able to guess what the output would look like if we changed the physical properties"}, {"start": 35.52, "end": 41.36, "text": " of a material. This is a perfect setup for material synthesis, where these restrictions are not"}, {"start": 41.36, "end": 46.56, "text": " too limiting. Trying to perform high-quality neural rendering has been a really important"}, {"start": 46.56, "end": 52.0, "text": " research problem lately, and everyone is asking the question, can we do more with this?"}, {"start": 52.0, "end": 57.2, "text": " Can we move around with the camera and have a neural network predict what the scene would look like?"}, {"start": 57.2, "end": 63.2, "text": " Can we do this with animations? Well, have a look at this new paper, which is a collaboration"}, {"start": 63.2, "end": 68.24000000000001, "text": " between researchers at the Technical University of Munich and Stanford University,"}, {"start": 68.24000000000001, "end": 72.64, "text": " where all we need is some video footage of a person or object."}, {"start": 72.64, "end": 78.0, "text": " It takes a close look at this kind of information and can offer three killer applications."}, {"start": 78.64, "end": 81.84, "text": " One, it can synthesize the object from new viewpoints."}, {"start": 81.84, "end": 91.36, "text": " Two, it can create a video of this scene and imagine what it would look like if we reorganized it,"}, {"start": 91.36, "end": 117.36, "text": " or can even add more objects to it. And three, perhaps everyone's favorite, performing facial"}, {"start": 117.36, "end": 123.76, "text": " reenactment from a source to a target actor. As with many other methods, these neural textures"}, {"start": 123.76, "end": 129.84, "text": " are stored on top of the 3D objects, however, a more detailed, high-dimensional description"}, {"start": 129.84, "end": 135.76, "text": " is also stored and learned by this algorithm, which enables it to have a deeper understanding"}, {"start": 135.76, "end": 141.84, "text": " of intricate light-transport effects to create these new views. For instance, it is particularly good"}, {"start": 141.84, "end": 147.12, "text": " at reproducing specular highlights, which typically change rapidly as we change our viewpoint"}, {"start": 147.12, "end": 152.64000000000001, "text": " for the object. One of the main challenges was building a learning algorithm that can deal"}, {"start": 152.64000000000001, "end": 158.24, "text": " with this kind of complexity. The synthesis of mouth movements was always the Achilles' heel"}, {"start": 158.24, "end": 162.96, "text": " of these methods, so have a look at how well this one does with it. 
You can also see with the"}, {"start": 162.96, "end": 169.20000000000002, "text": " comparisons here that in general, this new technique smokes the competition. So how much training"}, {"start": 169.20000000000002, "end": 174.56, "text": " data do we need to achieve this? I would imagine that this would take hours and hours of video"}, {"start": 174.56, "end": 181.28, "text": " footage, right? No, not at all. This is what the results look like as a function of the amount"}, {"start": 181.28, "end": 187.92000000000002, "text": " of training data. On the left, you see that it already kind of works with 125 images,"}, {"start": 187.92000000000002, "end": 195.44, "text": " but contains artifacts, but if we can supply 1000 images, we're good. Note that 1000 images"}, {"start": 195.44, "end": 199.76, "text": " sounds like a lot, but it really isn't, it's just half a minute worth of video."}, {"start": 199.76, "end": 207.04, "text": " How crazy is that? Some limitations still apply. You see one failure case here, and the neural"}, {"start": 207.04, "end": 212.88, "text": " network typically needs to be retrained if we wish to use it on new objects, but this work finally"}, {"start": 212.88, "end": 219.04, "text": " generalizes to multiple viewpoints, animation, scene editing, lots of different materials and"}, {"start": 219.04, "end": 224.48, "text": " geometries, and I can only imagine what we'll get two more papers down the line. Respect to"}, {"start": 224.48, "end": 229.76, "text": " used tools for accomplishing this, and in general, make sure to have a look at Matthias Neesner's lab"}, {"start": 229.76, "end": 235.6, "text": " who just got tenured as a full professor and he's only 32 years old. Congratulations."}, {"start": 236.32, "end": 241.67999999999998, "text": " If you have AI-related ideas and you would like to try them, but not do it in the cloud because"}, {"start": 241.67999999999998, "end": 248.0, "text": " you wish to own your own hardware, look no further than Lambda Labs. Lambda Labs offers sleek,"}, {"start": 248.0, "end": 253.67999999999998, "text": " beautifully designed laptops, workstations and servers that come pre-installed with every major"}, {"start": 253.68, "end": 259.2, "text": " learning framework and updates them for you, taking care of all the dependencies. Look at those"}, {"start": 259.2, "end": 264.64, "text": " beautiful and powerful machines. This way, you can spend more if your time with your ideas and"}, {"start": 264.64, "end": 270.08, "text": " don't have to deal with all the software maintenance work. Make sure to go to lambdalabs.com,"}, {"start": 270.08, "end": 276.0, "text": " slash papers, or click their link in the video description and look around, and if you have any"}, {"start": 276.0, "end": 281.68, "text": " questions, you can even call them for advice. Big thanks to Lambdalabs for supporting this video"}, {"start": 281.68, "end": 286.0, "text": " and helping us make better videos for you. Thanks for watching and for your generous support,"}, {"start": 286.0, "end": 315.84, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=goD36hVVl7M
How Can This Liquid Climb?
📝 The paper "On the Accurate Large-scale Simulation of Ferrofluids" is available here: http://computationalsciences.org/publications/huang-2019-ferrofluids.html ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Ivelin Ivanov, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #Ferrofluids
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. You are in for a real treat today, because today we are not going to simulate just plain regular fluids. No, we are going to simulate ferrofluids. These are fluids that have magnetic properties and respond to an external magnetic field, and you will see in a moment that they are able to even climb things. You can see in the reference footage here that this also means that if there is no magnetic field, we have a regular fluid simulation. Nothing too crazy here. In this real-world footage, we have a tray of ferrofluid up in the air and we have a magnet below it, so as the tray descends and gets closer to the magnet, this happens. But the strength of the magnetic field is not the only factor that a simulation needs to take into account. Here is another real experiment that shows that the orientation of the magnet also makes a great deal of difference to the distortions of the fluid surface. And now let's have a look at some simulations. This simulation reproduces the rotating magnet experiment that you've seen a second ago. It works great. What is even more, if we are in a simulation, we can finally do things that would be either expensive or impossible in real life, so let's do exactly that. You see a steel sphere attracting the ferrofluid here, and now the strength of the magnet within is decreased, giving us the impression that we can bend this fluid to our will. How cool is that? In the simulation, we can also experiment with arbitrarily shaped magnets. And here's the legendary real experiment where, with magnetism, we can make a ferrofluid climb up on the steel helix. Look at that. When I first saw this video and started reading the paper, I was just giggling like a little girl. So good. Just imagine how hard it is to do something where we have footage from the real world that keeps judging our simulation results, and we are only done when there's a near-exact match, such as the one you see here. Huge congratulations to the authors. You see here how the simulation output depends on the number of iterations. More iterations means that we redo the calculations over and over again and get results closer to what would happen in real life, at the cost of more computation time. However, as you see, we can get close to the real solution with even one iteration, which is remarkable. In my own fluid simulation experiments, when I tried to solve the pressure field, using one to four iterations gave me a result that's not only inaccurate but singular, which blows up the simulation. Look at this. On this axis, you can see how the fluid disturbances get more pronounced as a response to a stronger magnetic field. And in this direction, you see how the effect of surface tension smooths out these shapes. What a visualization. The information density in this example is just out of this world, and it is still both informative and beautiful. If only I could tell you how many times I had to remake each of the figures in pursuit of this; I can only imagine how long it took to finish this one. Bravo. And if all that's not enough for you to fall out of your chair, get this. It is about Libo Huang, the first author of this paper. I became quite curious about his other works and have found exactly zero of them. This was his first paper. My goodness. And of course it takes a team to create such a work, so congratulations to all three authors. This is one heck of a paper. Check it out in the video description.
Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifeher."}, {"start": 5.0, "end": 11.44, "text": " You are in for a real treat today because today we are not going to simulate just plain"}, {"start": 11.44, "end": 12.84, "text": " regular fluids."}, {"start": 12.84, "end": 16.240000000000002, "text": " No, we are going to simulate ferrofluids."}, {"start": 16.240000000000002, "end": 21.84, "text": " These are fluids that have magnetic properties and respond to an external magnetic field and"}, {"start": 21.84, "end": 26.2, "text": " you will see in a moment that they are able to even climb things."}, {"start": 26.2, "end": 30.96, "text": " You see in the reference footage here that this also means if there is no magnetic field"}, {"start": 30.96, "end": 33.76, "text": " we have a regular fluid simulation."}, {"start": 33.76, "end": 35.8, "text": " Nothing too crazy here."}, {"start": 35.8, "end": 41.04, "text": " In this real-world footage we have a tray of ferrofluid up in the air and we have a magnet"}, {"start": 41.04, "end": 48.879999999999995, "text": " below it so as the tray descends down and gets closer to the magnet, this happens."}, {"start": 48.879999999999995, "end": 53.28, "text": " But the strength of the magnetic field is not the only factor that a simulation needs"}, {"start": 53.28, "end": 54.84, "text": " to take into account."}, {"start": 54.84, "end": 59.96, "text": " Here is another real experiment that shows that the orientation of the magnet also makes"}, {"start": 59.96, "end": 65.28, "text": " a great deal of difference to the distortions of the fluid surface."}, {"start": 65.28, "end": 68.32000000000001, "text": " And now let's have a look at some simulations."}, {"start": 68.32000000000001, "end": 74.08000000000001, "text": " This simulation reproduces the rotating magnet experiment that you've seen a second ago."}, {"start": 74.08000000000001, "end": 80.28, "text": " It works great, what is even more if we are in a simulation we can finally do things"}, {"start": 80.28, "end": 86.56, "text": " that would either be expensive or impossible in the real life so let's do exactly that."}, {"start": 86.56, "end": 97.44, "text": " You see a steel sphere attracting the ferrofluid here and now the strength of the magnet within"}, {"start": 97.44, "end": 102.4, "text": " is decreased giving us the impression that we can bend this fluid to our will."}, {"start": 102.4, "end": 104.48, "text": " How cool is that?"}, {"start": 104.48, "end": 110.72, "text": " In the simulation we can also experiment with arbitrarily shaped magnets."}, {"start": 110.72, "end": 116.04, "text": " And here's the legendary real experiment where with magnetism we can make a ferrofluid"}, {"start": 116.04, "end": 119.48, "text": " climb up on the steel helix."}, {"start": 119.48, "end": 124.80000000000001, "text": " Look at that, when I first seen this video and started reading the paper I was just giggling"}, {"start": 124.80000000000001, "end": 126.56, "text": " like a little girl."}, {"start": 126.56, "end": 128.2, "text": " So good."}, {"start": 128.2, "end": 132.92000000000002, "text": " Just imagine how hard it is to do something where we have footage from the real world that"}, {"start": 132.92, "end": 138.07999999999998, "text": " keeps judging our simulation results and we are only done when there's a near exact"}, {"start": 138.07999999999998, "end": 140.76, "text": " match such as the one you see here."}, {"start": 140.76, "end": 142.88, 
"text": " Huge congratulations to the authors."}, {"start": 142.88, "end": 147.6, "text": " You see here how the simulation output depends on the number of iterations."}, {"start": 147.6, "end": 153.44, "text": " More iterations means that we redo the calculations over and over again and get results closer to"}, {"start": 153.44, "end": 158.16, "text": " what would happen in real life at the cost of more computation time."}, {"start": 158.16, "end": 163.88, "text": " However, as you see we can get close to the real solution with even one iteration which"}, {"start": 163.88, "end": 165.32, "text": " is remarkable."}, {"start": 165.32, "end": 169.56, "text": " In my own fluid simulation experiments when I tried to solve the pressure field using"}, {"start": 169.56, "end": 175.6, "text": " one to four iterations give me a result that's not only inaccurate but singular which blows"}, {"start": 175.6, "end": 178.12, "text": " up the simulation."}, {"start": 178.12, "end": 179.16, "text": " Look at this."}, {"start": 179.16, "end": 184.84, "text": " On this axis you can see how the fluid disturbances get more pronounced as a response to a stronger"}, {"start": 184.84, "end": 186.32, "text": " magnetic field."}, {"start": 186.32, "end": 192.4, "text": " And in this direction you see how the effect of surface tension smooths out these shapes."}, {"start": 192.4, "end": 194.16, "text": " What a visualization."}, {"start": 194.16, "end": 199.6, "text": " The information density in this example is just out of this world and it is still both"}, {"start": 199.6, "end": 202.07999999999998, "text": " informative and beautiful."}, {"start": 202.07999999999998, "end": 207.04, "text": " If only I could tell you how many times I have to remake each of the figures in pursuit"}, {"start": 207.04, "end": 211.35999999999999, "text": " of this I can only imagine how long it took to finish this one."}, {"start": 211.35999999999999, "end": 212.35999999999999, "text": " Bravo."}, {"start": 212.36, "end": 216.68, "text": " And if all that's not enough for you to fall out of your chair, get this."}, {"start": 216.68, "end": 220.08, "text": " It is about Libel Huang, the first author of this paper."}, {"start": 220.08, "end": 226.16000000000003, "text": " I became quite curious about his other works and have found exactly zero of them."}, {"start": 226.16000000000003, "end": 228.4, "text": " This was his first paper."}, {"start": 228.4, "end": 229.72000000000003, "text": " My goodness."}, {"start": 229.72000000000003, "end": 235.24, "text": " And of course it takes a team to create such a work so congratulations to all three authors."}, {"start": 235.24, "end": 237.20000000000002, "text": " This is one heck of a paper."}, {"start": 237.20000000000002, "end": 239.08, "text": " Check it out in the video description."}, {"start": 239.08, "end": 242.96, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=f9z1I_81_Q4
DeepMind Made a Math Test For Neural Networks
📝 The paper "Analysing Mathematical Reasoning Abilities of Neural Models" is available here: https://arxiv.org/abs/1904.01557 ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Ivelin Ivanov, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This paper from DeepMind is about taking a bunch of learning algorithms and torturing them with millions of classic math questions to find out if they can solve them. Sounds great, right? I wonder what kind of math questions an AI would find easy to solve. What percentage of these can a good learning algorithm answer today? Worry not, we'll discuss some of the results at the end of this video. These kinds of problems are typically solved by recurrent neural networks that are able to read and produce sequences of data, and to even begin to understand what the question is here, an AI would have to understand the concept of functions, variables, arithmetic operators, and of course, the words that form the question itself. It has to learn planning and precedence, that is, in what order we evaluate such an expression, and it has to have some sort of memory in which it can store the intermediate results. The main goal of this paper is to describe a data set that is designed in a very specific way to be able to benchmark the mathematical reasoning abilities of an AI. So how do we do that? First, it is made in a way that is very difficult to solve for someone without generalized knowledge. Imagine the kind of student at school who memorized everything from the textbooks but has no understanding of the underlying tasks, and if the teacher changes just one number in a question, the student is unable to solve the problem. We have all met that kind of student, right? Well, this test is designed in a way that students like these should fail at it. Of course, in our case, the student is the AI. Second, the questions should be modular. This is a huge advantage because a large number of these questions can be generated procedurally by adding different combinations of sub-tasks such as additions, function evaluations, and more. An additional advantage of this is that we can easily control the difficulty of these questions: the more modules we use, typically the more difficult the question gets. Third, the questions and answers should be able to come in any form. This is an advantage because the AI has to not only understand the mathematical expressions but also focus on what exactly we wish to know about them. This also means that the question itself can be about factorization, where the answer is expected to be either true or false. And the algorithm is not told that we are looking for a true or false answer; it has to be able to infer this from the question itself. And to be able to tackle all this properly, with this paper the authors released two million of these questions for training an AI, free of charge, to foster more future research in this direction. So what percentage of these can a good learning algorithm answer today? Let's have a look at some results. A neural network model that goes by the name Transformer Network produced the best results by being able to answer 50% of the questions. This you find in the extrapolation column here. When you look at the interpolation column, you see that it successfully answered 76% of these questions. So which one is it, 50% or 76%? Actually, both. The difference is that interpolation means that the numbers in these questions were within the bounds seen in the training data, whereas extrapolation means that some of these numbers are potentially much larger or smaller than anything the AI has seen in the training examples. 
I would say that given the difficulty of just even understanding what these questions are, these are really great results. Generally, in the future, we will be looking for algorithms that do well on the extrapolation tasks, because these are the AIs that have knowledge that generalizes well. So which tasks were easy and which were difficult? Interestingly, the AI had similar strengths and difficulties as we fellow humans have: rounding decimals and integers, comparisons, and basic algebra were quite easy for it, whereas detecting primality and factorization were not very accurate. I will keep an eye out for improvements in this area. If you're interested to hear more about it, make sure to subscribe to this series. And if you just push the red button, you may think that you're subscribed, but you're not. You are just kind of subscribed. Make sure to also click the bell icon to not miss these future episodes. Also, please make sure to read the paper. It is quite readable and contains a lot more really cool insights about this data set and the experiments. As always, the link is available in the video description. Thanks for watching and for your generous support, and I'll see you next time.
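The modular, procedurally generated questions and the interpolation versus extrapolation split described above can be made concrete with a small sketch. To be clear, this is not DeepMind's actual generator (their released dataset is linked in the description); it is a toy illustration, and every function name and parameter here is made up for the example.

```python
import random

# Toy sketch of modular question generation: each "module" wraps the running
# value in one more sub-task, so more modules means a harder question.

def make_arithmetic_question(num_modules, max_value):
    """Compose a chain of additions/multiplications and phrase it as text."""
    value = random.randint(1, max_value)
    expression = str(value)
    for _ in range(num_modules):
        operand = random.randint(1, max_value)
        if random.random() < 0.5:
            expression = f"({expression} + {operand})"
            value = value + operand
        else:
            expression = f"({expression} * {operand})"
            value = value * operand
    question = f"What is {expression}?"
    return question, str(value)

# Interpolation split: numbers stay within the range seen during training.
# Extrapolation split: numbers are deliberately larger than anything seen in training.
train_set   = [make_arithmetic_question(num_modules=2, max_value=100)    for _ in range(5)]
interp_test = [make_arithmetic_question(num_modules=2, max_value=100)    for _ in range(5)]
extrap_test = [make_arithmetic_question(num_modules=2, max_value=10_000) for _ in range(5)]

for question, answer in interp_test + extrap_test:
    print(question, "->", answer)
```

A model that merely memorized training questions would fail as soon as one number changes, which is exactly the kind of shallow student this benchmark is designed to expose.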
[{"start": 0.0, "end": 4.28, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zonaifahir."}, {"start": 4.28, "end": 8.6, "text": " This paper from DeepMind is about taking a bunch of learning algorithms"}, {"start": 8.6, "end": 12.44, "text": " and torturing them with millions of classic math questions"}, {"start": 12.44, "end": 14.76, "text": " to find out if they can solve them."}, {"start": 14.76, "end": 16.52, "text": " Sounds great, right?"}, {"start": 16.52, "end": 21.28, "text": " I wonder what kind of math questions would an AI find easy to solve?"}, {"start": 21.28, "end": 25.52, "text": " What percentage of these can a good learning algorithm answer today?"}, {"start": 25.52, "end": 29.16, "text": " Worry not, we'll discuss some of the results at the end of this video."}, {"start": 29.16, "end": 33.16, "text": " These kinds of problems are typically solved by recurrent neural networks"}, {"start": 33.16, "end": 36.96, "text": " that are able to read and produce sequences of data"}, {"start": 36.96, "end": 40.36, "text": " and to even begin to understand what the question is here,"}, {"start": 40.36, "end": 45.400000000000006, "text": " an AI would have to understand the concept of functions, variables,"}, {"start": 45.400000000000006, "end": 50.400000000000006, "text": " arithmetic operators, and of course, the words that form the question itself."}, {"start": 50.400000000000006, "end": 52.8, "text": " It has to learn planning and precedence,"}, {"start": 52.8, "end": 56.56, "text": " that is, in what order do we evaluate such an expression"}, {"start": 56.56, "end": 61.64, "text": " and it has to have some sort of memory in which it can store the intermediate results?"}, {"start": 61.64, "end": 64.96000000000001, "text": " The main goal of this paper is to describe a data set"}, {"start": 64.96000000000001, "end": 68.84, "text": " that is designed in a very specific way to be able to benchmark"}, {"start": 68.84, "end": 72.24000000000001, "text": " the mathematical reasoning abilities of an AI."}, {"start": 72.24000000000001, "end": 73.80000000000001, "text": " So how do we do that?"}, {"start": 73.80000000000001, "end": 77.88, "text": " First, it is made in a way that it's very difficult to solve for someone"}, {"start": 77.88, "end": 79.68, "text": " without generalized knowledge."}, {"start": 79.68, "end": 84.48, "text": " Imagine the kind of student at school who memorized everything from the textbooks"}, {"start": 84.48, "end": 87.60000000000001, "text": " but has no understanding of the underlying tasks"}, {"start": 87.60000000000001, "end": 90.88000000000001, "text": " and if the teacher changes just one number in a question,"}, {"start": 90.88000000000001, "end": 93.68, "text": " the student is unable to solve the problem."}, {"start": 93.68, "end": 96.0, "text": " We all met that kind of student, right?"}, {"start": 96.0, "end": 101.44, "text": " Well, this test is designed in a way that students like these should fail at it."}, {"start": 101.44, "end": 105.04, "text": " Of course, in our case, the student is the AI."}, {"start": 105.04, "end": 108.04, "text": " Second, the questions should be modular."}, {"start": 108.04, "end": 111.88000000000001, "text": " This is a huge advantage because a large number of these questions"}, {"start": 111.88, "end": 117.16, "text": " can be generated procedurally by adding a different combination of sub-tasks"}, {"start": 117.16, "end": 121.08, "text": " such as additions, function evaluations, and more."}, 
{"start": 121.08, "end": 126.67999999999999, "text": " An additional advantage of this is that we can easily control the difficulty of these questions."}, {"start": 126.67999999999999, "end": 131.4, "text": " The more modules we use, typically the more difficult the question gets."}, {"start": 131.4, "end": 136.44, "text": " Third, the questions and answers should be able to come in any form."}, {"start": 136.44, "end": 141.68, "text": " This is an advantage because the AI has to not only understand the mathematical expressions"}, {"start": 141.68, "end": 145.68, "text": " but also focus on what exactly we wish to know about them."}, {"start": 145.68, "end": 150.24, "text": " This also means that the question itself can be about factorization"}, {"start": 150.24, "end": 154.0, "text": " where the answer is expected to be either true or false."}, {"start": 154.0, "end": 158.56, "text": " And the algorithm is not told we are looking for a true or false answer"}, {"start": 158.56, "end": 162.32, "text": " it has to be able to infer this from the question itself."}, {"start": 162.32, "end": 165.84, "text": " And to be able to tackle all this properly with this paper,"}, {"start": 165.84, "end": 169.44, "text": " the authors released two million of these questions"}, {"start": 169.44, "end": 175.28, "text": " for training an AI free of charge to foster more future research in this direction."}, {"start": 175.28, "end": 180.32, "text": " I wonder what percentage of these can a good learning algorithm answer today."}, {"start": 180.32, "end": 182.16, "text": " Let's have a look at some results."}, {"start": 182.16, "end": 186.07999999999998, "text": " A neural network model that goes by the name Transformer Network"}, {"start": 186.07999999999998, "end": 191.68, "text": " produced the best results by being able to answer 50% of the questions."}, {"start": 191.68, "end": 194.8, "text": " This you find in the extrapolation column here."}, {"start": 194.8, "end": 202.4, "text": " When you look at the interpolation column, you see that it successfully answered 76% of these questions."}, {"start": 202.4, "end": 207.20000000000002, "text": " So which one is it? 
50% or 76%."}, {"start": 207.20000000000002, "end": 209.04000000000002, "text": " Actually, both."}, {"start": 209.04000000000002, "end": 213.60000000000002, "text": " The difference is that interpolation means that the numbers in these questions"}, {"start": 213.60000000000002, "end": 217.20000000000002, "text": " were within the bounds that was seen in the training data"}, {"start": 217.20000000000002, "end": 222.16000000000003, "text": " where extrapolation means that some of these numbers are potentially much larger"}, {"start": 222.16, "end": 227.04, "text": " or smaller than others that the AI has seen in the training examples."}, {"start": 227.04, "end": 232.64, "text": " I would say that given the difficulty of just even understanding what these questions are,"}, {"start": 232.64, "end": 234.96, "text": " these are really great results."}, {"start": 234.96, "end": 240.96, "text": " Generally, in the future, we will be looking for algorithms that do well on the extrapolation tasks"}, {"start": 240.96, "end": 244.64, "text": " because these are the AI's that have knowledge that generalize as well."}, {"start": 245.44, "end": 249.2, "text": " So which tasks were easy and which were difficult?"}, {"start": 249.2, "end": 254.48, "text": " Interestingly, the AI had similar difficulties as we follow humans have,"}, {"start": 254.48, "end": 258.71999999999997, "text": " namely rounding decimals and integers, comparisons,"}, {"start": 258.71999999999997, "end": 266.0, "text": " basic algebra was quite easy for it, whereas detecting primality and factorization were not very accurate."}, {"start": 266.0, "end": 269.03999999999996, "text": " I will keep an eye out on improvements in this area."}, {"start": 269.03999999999996, "end": 273.03999999999996, "text": " If you're interested to hear more about it, make sure to subscribe to this series."}, {"start": 273.03999999999996, "end": 277.84, "text": " And if you just push the red button, you may think that you're subscribed, but you're not."}, {"start": 277.84, "end": 280.4, "text": " You are just kind of subscribed."}, {"start": 280.4, "end": 284.47999999999996, "text": " Make sure to also click the bell icon to not miss these future episodes."}, {"start": 284.47999999999996, "end": 287.11999999999995, "text": " Also, please make sure to read the paper."}, {"start": 287.11999999999995, "end": 293.28, "text": " It is quite readable and contains a lot more really cool insights about this data set and the experiments."}, {"start": 293.28, "end": 296.32, "text": " As always, the link is available in the video description."}, {"start": 296.32, "end": 308.8, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=SfvRhqsmU4o
NVIDIA’s AI Transformed My Chihuahua Into a Lion
Check out Lambda Labs here: https://lambdalabs.com/papers 📝 The paper "Few-Shot Unsupervised Image-to-Image Translation" and its demo is available here: https://nvlabs.github.io/FUNIT/ https://nvlabs.github.io/FUNIT/petswap.html 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Ivelin Ivanov, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #NVIDIA #Funit
This episode has been supported by Lambda Labs. Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Let's talk about a great recent development in image translation. Image translation means that some image goes in and it is translated into an analogous image of a different class. A good example of this would be when we have a standing tiger as an input and we ask the algorithm to translate this image into the same tiger lying down. This leads to many amazing applications. For instance, we can specify a daytime image and get the same scene during nighttime. We can go from maps to satellite images, from video games to reality, and more. However, much like many learning algorithms today, most of these techniques have a key limitation. They need a lot of training data, or, in other words, these neural networks require seeing a ton of images in all of these classes before they can learn to meaningfully translate between them. This is clearly inferior to how humans think, right? If I showed you a horse, you could easily imagine, and some of you could even draw, what it would look like if it were a zebra instead. As I'm sure you have noticed by reading arguments on many internet forums, humans are pretty good at generalization. So, how could we possibly develop a learning technique that can look at very few images and obtain knowledge from them that generalizes well? Have a look at this crazy new paper from scientists at Nvidia that accomplishes exactly that. In this example, they show an input image of a golden retriever, and then we specify the target classes by showing the network a bunch of different animal breeds, and look, in goes your golden and out comes a pug or any other dog breed you can think of. And now, hold on to your papers, because this AI doesn't have access to these target images and it only sees them for the very first time as we give them to it. It can do this translation with previously unseen object classes. How is this insanity even possible? This work contains a generative adversarial network which assumes that the training set we give it contains images of different animals, and what it does during training is practice the translation process between these animals. It also contains a class encoder that creates a low-dimensional latent space for each of these classes, which means that it tries to compress these images down to a few features that contain the essence of these individual dog breeds. Apparently, it can learn the essence of these classes really well, because it was able to convert our image into a pug without ever seeing a pug other than this one target image. As you can see here, it comes out way ahead of previous techniques, but of course, if we give it a target image that is dramatically different from anything the AI has seen before, it may falter. Luckily, you can even try it yourself through this web demo, which works on pets, so make sure to read the instructions carefully and let the experiments begin. In fact, due to popular request, let me kick this off with Lisa, my favorite chihuahua. I got many tempting alternatives, but worry not, in reality she will stay as is. I was also curious about trying a non-traditional head position, and as you see with the results, this was a much more challenging case for the AI. The paper also discusses this limitation in more detail. You know the saying: two more papers down the line, and I am sure this will also be remedied. 
I am hoping that you will also try your own pets, and that as a fellow scholar, you will flood the comments section here with your findings. Strictly for science, of course. If you are doing deep learning, make sure to look into Lambda GPU systems. Lambda offers workstations, servers, laptops and the GPU cloud for deep learning. You can save up to 90% over AWS, GCP and Azure GPU instances. Every Lambda GPU system is pre-installed with TensorFlow, PyTorch and Keras. Just plug it in and start training. Lambda customers include Apple, Microsoft and Stanford. Go to lambdalabs.com/papers or click the link in the video description to learn more. Big thanks to Lambda for supporting Two Minute Papers and helping us make better videos. Thanks for watching and for your generous support, and I'll see you next time.
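To give a rough feel for the class-encoder idea described above, here is a schematic PyTorch sketch. It is not NVIDIA's FUNIT implementation (that is linked in the description): the layer sizes, names, and the simple averaging of the few target images are assumptions chosen purely for illustration, and the real method additionally relies on adversarial training and a more elaborate decoder.

```python
import torch
import torch.nn as nn

# Schematic few-shot translation: a content encoder for the input image,
# a class encoder that averages a latent code over the few target images,
# and a decoder that combines the two. Sizes and names are illustrative only.

class ToyFewShotTranslator(nn.Module):
    def __init__(self, latent_dim=64):
        super().__init__()
        self.content_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, latent_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.class_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_dim * 2, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, content_image, few_target_images):
        # Content code keeps the pose/structure of the input animal.
        content = self.content_encoder(content_image)                  # (1, D, H/4, W/4)
        # Class code: encode each of the few target images and average them,
        # compressing the "essence" of the unseen breed into one vector.
        class_code = self.class_encoder(few_target_images).mean(dim=0)  # (D,)
        # Broadcast the class code spatially and decode the combination.
        b, d, h, w = content.shape
        class_map = class_code.view(1, d, 1, 1).expand(b, d, h, w)
        return self.decoder(torch.cat([content, class_map], dim=1))

model = ToyFewShotTranslator()
input_image = torch.randn(1, 3, 64, 64)    # e.g. the golden retriever
target_images = torch.randn(5, 3, 64, 64)  # a handful of images of the unseen breed
translated = model(input_image, target_images)
print(translated.shape)                    # torch.Size([1, 3, 64, 64])
```

The key point is that the target breed is only represented by the averaged class code computed at inference time, which is what allows translation to classes that were never part of training.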
[{"start": 0.0, "end": 3.08, "text": " This episode has been supported by Lambda Labs."}, {"start": 3.08, "end": 7.12, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Jornaifahir."}, {"start": 7.12, "end": 11.26, "text": " Let's talk about a great recent development in image translation."}, {"start": 11.26, "end": 18.92, "text": " Image translation means that some image goes in and it is translated into an analogous image of a different class."}, {"start": 18.92, "end": 23.46, "text": " A good example of this would be when we have a standing tiger as an input"}, {"start": 23.46, "end": 29.2, "text": " and we ask the algorithm to translate this image into the same tiger lying down."}, {"start": 29.2, "end": 31.82, "text": " This leads to many amazing applications."}, {"start": 31.82, "end": 37.86, "text": " For instance, we can specify a daytime image and get the same scene during nighttime."}, {"start": 37.86, "end": 41.6, "text": " We can go from maps to satellite images,"}, {"start": 41.6, "end": 44.82, "text": " from video games to reality and more."}, {"start": 44.82, "end": 51.18, "text": " However, much like many learning algorithms today, most of these techniques have a key limitation."}, {"start": 51.18, "end": 54.879999999999995, "text": " They need a lot of training data or, in other words,"}, {"start": 54.88, "end": 59.82, "text": " these neural networks require seeing a ton of images in all of these classes"}, {"start": 59.82, "end": 63.36, "text": " before they can learn to meaningfully translate between them."}, {"start": 63.36, "end": 67.56, "text": " This is clearly inferior to how humans think, right?"}, {"start": 67.56, "end": 70.92, "text": " If I would show you a horse, you could easily imagine"}, {"start": 70.92, "end": 75.56, "text": " and some of you could even draw what it would look like if it were a zebra instead."}, {"start": 75.56, "end": 79.66, "text": " As I'm sure you have noticed by reading arguments on many internet forums,"}, {"start": 79.66, "end": 82.80000000000001, "text": " humans are pretty good at generalization."}, {"start": 82.8, "end": 86.24, "text": " So, how could we possibly develop a learning technique"}, {"start": 86.24, "end": 92.0, "text": " that can look at very few images and obtain knowledge from them that generalizes well?"}, {"start": 92.0, "end": 97.72, "text": " Have a look at this crazy new paper from scientists at Nvidia that accomplishes exactly that."}, {"start": 97.72, "end": 101.88, "text": " In this example, they show an input image of a golden retriever"}, {"start": 101.88, "end": 108.24, "text": " and then we specify the target classes by showing them a bunch of different animal breeds"}, {"start": 108.24, "end": 116.0, "text": " and look, in goes your golden and out comes a pug or any other dog breed you can think of."}, {"start": 116.0, "end": 122.0, "text": " And now, hold on to your papers because this AI doesn't have access to these target images"}, {"start": 122.0, "end": 126.72, "text": " and it had only seen them the very first time as we just gave it to them."}, {"start": 126.72, "end": 131.44, "text": " It can do this translation with previously unseen object classes."}, {"start": 131.44, "end": 134.72, "text": " How is this insanity even possible?"}, {"start": 134.72, "end": 140.24, "text": " This work contains a generative adversarial network which assumes that the training set we give it"}, {"start": 140.24, "end": 146.72, "text": " contains images of different animals and what it does during 
training is practicing the translation process"}, {"start": 146.72, "end": 148.16, "text": " between these animals."}, {"start": 148.16, "end": 154.56, "text": " It also contains a class encoder that creates a low dimensional latent space for each of these classes"}, {"start": 154.56, "end": 158.56, "text": " which means that it tries to compress these images down to a few features"}, {"start": 158.56, "end": 162.4, "text": " that contain the essence of these individual dog breeds."}, {"start": 162.4, "end": 166.64000000000001, "text": " Apparently, it can learn the essence of these classes really well"}, {"start": 166.64000000000001, "end": 171.6, "text": " because it was able to convert our image into a pug without ever seeing a pug"}, {"start": 171.6, "end": 173.44, "text": " other than this one target image."}, {"start": 174.16, "end": 178.32, "text": " As you can see here, it comes out way ahead of previous techniques"}, {"start": 178.32, "end": 182.8, "text": " but of course, if we give it a target image that is dramatically different"}, {"start": 182.8, "end": 186.0, "text": " than anything the AI has seen before, it may falter."}, {"start": 186.8, "end": 190.4, "text": " Luckily, you can even try it yourself through this web demo"}, {"start": 190.4, "end": 194.24, "text": " which works on pets so make sure to read the instructions carefully"}, {"start": 194.24, "end": 196.8, "text": " and let the experiments begin."}, {"start": 196.8, "end": 202.24, "text": " In fact, due to popular requests, let me kick this off with Lisa, my favorite chew hour."}, {"start": 203.36, "end": 208.64000000000001, "text": " I got many tempting alternatives but worry not, in reality she will stay as is."}, {"start": 209.44, "end": 213.20000000000002, "text": " I was also curious about trying a non-traditional head position"}, {"start": 213.20000000000002, "end": 217.76, "text": " and as you see with the results, this was a much more challenging case for the AI."}, {"start": 217.76, "end": 221.28, "text": " The paper also discusses this limitation in more detail."}, {"start": 221.28, "end": 224.16, "text": " You know the saying, two more papers down the line"}, {"start": 224.16, "end": 226.72, "text": " and I am sure this will also be remedied."}, {"start": 226.72, "end": 229.67999999999998, "text": " I am hoping that you will also try your own pets"}, {"start": 229.67999999999998, "end": 234.48, "text": " and as a fellow scholar, you will flood the comments section here with your findings."}, {"start": 234.48, "end": 236.72, "text": " Strictly for science, of course."}, {"start": 236.72, "end": 241.12, "text": " If you are doing deep learning, make sure to look into Lambda GPU systems."}, {"start": 241.12, "end": 247.35999999999999, "text": " Lambda offers workstations, servers, laptops and the GPU cloud for deep learning."}, {"start": 247.36, "end": 254.08, "text": " You can save up to 90% over AWS, GCP and Azure GPU instances."}, {"start": 254.08, "end": 259.52000000000004, "text": " Every Lambda GPU system is pre-installed with TensorFlow, PyTorch and Carras."}, {"start": 259.52000000000004, "end": 262.0, "text": " Just plug it in and start training."}, {"start": 262.0, "end": 266.32, "text": " Lambda customers include Apple, Microsoft and Stanford."}, {"start": 266.32, "end": 272.96000000000004, "text": " Go to LambdaLabs.com, slash papers, or click the link in the video description to learn more."}, {"start": 272.96, "end": 277.84, "text": " Big thanks to Lambda for supporting two minute papers and 
helping us make better videos."}, {"start": 277.84, "end": 307.67999999999995, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=QPwhEnAILa0
Should AI Research Try to Model the Human Brain?
❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers ₿ Crypto and PayPal links are available below. Thank you very much for your generous support! › PayPal: https://www.paypal.me/TwoMinutePapers › Bitcoin: 1a5ttKiVQiDcr9j8JT2DoHGzLG7XTJccX › Bitcoin Cash: qzy42yt06xqr5f83khnhcxm2mf53cllvtytyp6ndw5 › Ethereum: 0xbBD767C0e14be1886c6610bf3F592A91D866d380 › LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg 📝 The paper "Reinforcement Learning, Fast and Slow" is available here: https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(19)30061-0 The Bitter Lesson: https://www.youtube.com/watch?v=wEgq6sT1uq8 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Ivelin Ivanov, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. AI research has come a long, long way in the last few years. Not so long ago, we were lucky if we could train a neural network to understand traffic signs, and since then, so many things have happened. By harnessing the power of learning algorithms, we are now able to impersonate other people by using a consumer camera, generate high-quality virtual human faces for people that don't exist, or pretend to be able to dance as a pro dancer by using external video footage and transferring it onto ourselves. Even though we are progressing at a staggering pace, there is a lot of debate as to which research direction is the most promising going forward. Roughly speaking, there are two schools of thought. One, we recently talked about Richard Sutton's amazing article by the name The Bitter Lesson, in which he makes a great argument that AI research should not try to mimic the way the human brain works. He argues that instead, all we need to do is formulate our problems in a general manner so that our learning algorithm may find something that is potentially much better suited for a problem than our brain is. I put a link to this video in the description if you're interested. And two, a different school of thought says that we should look at all these learning algorithms that use a lot of powerful hardware and can do wondrous things like playing a bunch of Atari games at a superhuman level. However, they learn orders of magnitude slower than the human brain does, so it should definitely be worth it to try to study and model the human brain, at least until we can match it in terms of efficiency. This school of thought is what we are going to talk about in this video. As an example, let's take a look at deep reinforcement learning in the context of playing computer games. This technique is a combination of a neural network that processes the visual data that we see on the screen and the reinforcement learner that comes up with the gameplay-related decisions. Absolutely amazing algorithm, a true breakthrough in AI research. Very powerful, however, also quite slow. And by slow, I mean that we can sit for an hour in front of our computer and wonder why our learner does not work at all, because it loses all of its lives almost immediately. If we remain patient, we find out that it works, it just learns at a glacial pace. So why is this so slow? Well, two reasons. Reason number one is that the learning happens through incremental parameter adjustment. If a human fails really badly at a task, the human would know that a drastic adjustment to the strategy is necessary, while the deep reinforcement learner would start applying tiny, tiny changes to its behavior and test again if things got better. This takes a while, and as a result, seems unlikely to have a close relation to how we humans think. The second reason for it being slow is the presence of weak inductive bias. This means that the learner does not contain any information about the problem we have at hand, or in other words, it has never seen the game we are playing before and has no other previous knowledge about games at all. This is desirable in some cases because we can reuse one learning algorithm for a variety of problems. However, because this way the AI has to test a stupendously large number of potential hypotheses about the game, we will have to pay for this convenience by using a mightily inefficient algorithm. But is this all really true? 
Does deep reinforcement learning really have to be so slow? And what on earth does this have to do with our brain? Well, this paper proposes an interesting counterargument that this is not necessarily true, and argues that with a few changes, the efficiency of deep reinforcement learning may be drastically improved. And get this: it also tells us that these changes are possibly rooted in neuroscience. One such change is using episodic memory, which stores previous experiences to help estimate the potential value of different actions, and this way, drastic parameter adjustments become a possibility. It not only improves the efficiency, but there is more to it, because there are recent studies that show that using episodic memory indeed contributes to the learning of real humans and animals alike. And two, it is beneficial to let the AI implement its own reinforcement learning algorithm, a concept often referred to as learning to learn, or meta reinforcement learning. This also helps obtaining more general knowledge that can be reused across tasks, further improving the efficiency of the agent. Here you see a picture of an fMRI, and some regions are marked with yellow and orange here. What could this possibly mean? Well, hold on to your papers, because these highlight neural structures that implement a very similar meta reinforcement learning scheme within the human brain. It turns out that meta reinforcement learning, or this learning to learn scheme, may not just be something that speeds up our AI algorithms, but may be a fundamental principle of the human brain as well. So these two changes to deep reinforcement learning not only drastically improve its efficiency, but they also make it suddenly map quite a bit better to our brain. How cool is that? So which school of thought are you most fond of? Should we model the brain or should we listen to Richard Sutton's Bitter Lesson? Let me know in the comments. Also make sure to have a look at the paper. I found it to be quite readable, and you really don't need to be a neuroscientist to read it and learn quite a few new things. Make sure to have a look at it in the video description. Now, I think you noticed that this paper doesn't contain the usual visual fireworks and is more complex than your average Two Minute Papers video, and hence I expected it to get significantly fewer views. That's not a great business model, but you know what? I made this channel so I can share with you all these important lessons that I learned during my journey. This has been a true privilege, and I am thrilled that I am still able to talk about all these amazing papers without worrying too much whether any of these videos will go viral or not. This has only been possible because of your unwavering support on patreon.com/TwoMinutePapers. If you feel like chipping in, please click the Patreon link in the video description. And if you are more of a crypto person, we also support cryptocurrencies like Bitcoin, Ethereum and Litecoin; the addresses are also available in the video description. Thanks for watching and for your generous support, and I'll see you next time.
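The episodic memory idea mentioned above, storing previous experiences and using them to estimate how valuable an action might be, can be sketched in a few lines. This is a simplified nearest-neighbor flavor of episodic control rather than the exact scheme from the paper; the state embeddings, the distance measure, and the number of neighbors are all assumptions made for the sake of the example.

```python
import numpy as np

# Toy episodic memory: store (state embedding, action, return) tuples and
# estimate an action's value as the average return of the most similar
# past states in which that action was taken.

class EpisodicMemory:
    def __init__(self, num_neighbors=5):
        self.num_neighbors = num_neighbors
        self.states, self.actions, self.returns = [], [], []

    def store(self, state_embedding, action, episode_return):
        self.states.append(np.asarray(state_embedding, dtype=float))
        self.actions.append(action)
        self.returns.append(float(episode_return))

    def estimate_value(self, state_embedding, action):
        """Average return over the k nearest stored states with this action."""
        query = np.asarray(state_embedding, dtype=float)
        candidates = [(np.linalg.norm(s - query), r)
                      for s, a, r in zip(self.states, self.actions, self.returns)
                      if a == action]
        if not candidates:
            return 0.0  # no experience yet for this action
        candidates.sort(key=lambda pair: pair[0])
        nearest = candidates[: self.num_neighbors]
        return sum(r for _, r in nearest) / len(nearest)

# Usage: after each episode, store what happened; when acting, pick the action
# with the highest estimated value. A single surprising episode immediately
# changes the estimates, so drastic behavior changes become possible.
memory = EpisodicMemory()
memory.store(state_embedding=[0.1, 0.9], action="jump", episode_return=12.0)
memory.store(state_embedding=[0.2, 0.8], action="jump", episode_return=10.0)
memory.store(state_embedding=[0.9, 0.1], action="duck", episode_return=-3.0)
print(memory.estimate_value([0.15, 0.85], "jump"))  # close to 11.0
```

This contrasts with incremental parameter adjustment, where the same surprising episode would only nudge the network weights by a tiny amount.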
[{"start": 0.0, "end": 4.12, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifeher."}, {"start": 4.12, "end": 8.120000000000001, "text": " AI research has come a long, long way in the last few years."}, {"start": 8.120000000000001, "end": 12.92, "text": " Not so long ago, we were lucky if we could train a neural network to understand traffic"}, {"start": 12.92, "end": 16.88, "text": " signs and since then, so many things happened."}, {"start": 16.88, "end": 22.28, "text": " By harnessing the power of learning algorithms, we are now able to impersonate other people"}, {"start": 22.28, "end": 28.48, "text": " by using a consumer camera, generate high-quality virtual human faces for people that don't"}, {"start": 28.48, "end": 35.64, "text": " exist or pretend to be able to dance as a pro dancer by using an external video footage"}, {"start": 35.64, "end": 38.56, "text": " and transferring it onto ourselves."}, {"start": 38.56, "end": 42.94, "text": " Even though we are progressing at a staggering pace, there is a lot of debate as to which"}, {"start": 42.94, "end": 46.96, "text": " research direction is the most promising going forward."}, {"start": 46.96, "end": 49.72, "text": " Roughly speaking, there are two schools of thought."}, {"start": 49.72, "end": 56.040000000000006, "text": " One, we recently talked about Richard Sutton's amazing article by the name The Better Lesson,"}, {"start": 56.04, "end": 61.44, "text": " in which he makes a great argument that AI research should not try to mimic the way"}, {"start": 61.44, "end": 63.16, "text": " the human brain works."}, {"start": 63.16, "end": 68.64, "text": " He argues that instead, all we need to do is formulate our problems in a general manner"}, {"start": 68.64, "end": 74.2, "text": " so that our learning algorithm may find something that is potentially much better suited for"}, {"start": 74.2, "end": 76.32, "text": " a problem than our brain is."}, {"start": 76.32, "end": 79.88, "text": " I put a link to this video in the description if you're interested."}, {"start": 79.88, "end": 83.92, "text": " And two, a different school of thought says that we should look at all these learning"}, {"start": 83.92, "end": 89.72, "text": " algorithms that use a lot of powerful hardware and can do wondrous things like playing a bunch"}, {"start": 89.72, "end": 92.52, "text": " of Atari games at a superhuman level."}, {"start": 92.52, "end": 98.52, "text": " However, they learn orders of magnitude slower than the human brain does, so it should definitely"}, {"start": 98.52, "end": 105.0, "text": " be worth it to try to study and model the human brain at least until we can match it in terms"}, {"start": 105.0, "end": 106.48, "text": " of efficiency."}, {"start": 106.48, "end": 110.12, "text": " This school of thought is what we are going to talk about in this video."}, {"start": 110.12, "end": 115.72, "text": " As an example, let's take a look at deep reinforcement learning in the context of playing computer"}, {"start": 115.72, "end": 116.72, "text": " games."}, {"start": 116.72, "end": 121.44, "text": " This technique is a combination of a neural network that processes the visual data that"}, {"start": 121.44, "end": 126.04, "text": " we see on the screen and the reinforcement learner that comes up with the gameplay-related"}, {"start": 126.04, "end": 127.04, "text": " decisions."}, {"start": 127.04, "end": 132.32, "text": " Absolutely amazing algorithm, a true breakthrough in AI research."}, {"start": 132.32, "end": 
136.24, "text": " Very powerful, however, also quite slow."}, {"start": 136.24, "end": 141.64000000000001, "text": " And by slow, I mean that we can sit for an hour in front of our computer and wonder why"}, {"start": 141.64000000000001, "end": 147.32000000000002, "text": " our learner does not work at all because it loses all of its lives almost immediately."}, {"start": 147.32000000000002, "end": 152.84, "text": " If we remain patient, we find out that it works, it just learns at a glacial pace."}, {"start": 152.84, "end": 155.24, "text": " So why is this so slow?"}, {"start": 155.24, "end": 157.52, "text": " Well, two reasons."}, {"start": 157.52, "end": 162.84, "text": " Reason number one is that the learning happens through incremental parameter adjustment."}, {"start": 162.84, "end": 167.92000000000002, "text": " If a human fails really badly at a task, the human would know that a drastic adjustment"}, {"start": 167.92000000000002, "end": 172.84, "text": " to the strategy is necessary, while the deep reinforcement learner would start applying"}, {"start": 172.84, "end": 178.6, "text": " tiny, tiny changes to its behavior and test again if things got better."}, {"start": 178.6, "end": 184.08, "text": " This takes a while and as a result seems unlikely to have a close relation to how we humans"}, {"start": 184.08, "end": 185.08, "text": " think."}, {"start": 185.08, "end": 190.16, "text": " The second reason for it being slow is the presence of weak inductive bias."}, {"start": 190.16, "end": 194.28, "text": " This means that the learner does not contain any information about the problem we have at"}, {"start": 194.28, "end": 200.4, "text": " hand or in other words, has never seen the game we are playing before and has no other"}, {"start": 200.4, "end": 203.32, "text": " previous knowledge about games at all."}, {"start": 203.32, "end": 208.64, "text": " This is desirable in some cases because we can reuse one learning algorithm for a variety"}, {"start": 208.64, "end": 209.64, "text": " of problems."}, {"start": 209.64, "end": 215.35999999999999, "text": " However, because this way, the AI has to test a stupendously large number of potential"}, {"start": 215.36, "end": 221.76000000000002, "text": " hypotheses about the game, we will have to pay for this convenience by using a mighty inefficient"}, {"start": 221.76000000000002, "end": 223.08, "text": " algorithm."}, {"start": 223.08, "end": 225.4, "text": " But is this all really true?"}, {"start": 225.4, "end": 229.12, "text": " Does deep reinforcement learning really have to be so slow?"}, {"start": 229.12, "end": 232.12, "text": " And what on earth does this have to do with our brain?"}, {"start": 232.12, "end": 237.52, "text": " Well, this paper proposes an interesting counter argument that this is not necessarily"}, {"start": 237.52, "end": 243.4, "text": " true and argues that with a few changes, the efficiency of deep reinforcement learning"}, {"start": 243.4, "end": 246.56, "text": " may be drastically improved and get this."}, {"start": 246.56, "end": 252.24, "text": " It also tells us that these changes are also possibly based in neuroscience."}, {"start": 252.24, "end": 258.72, "text": " One such change is using episodic memory, which stores previous experiences to help estimating"}, {"start": 258.72, "end": 263.88, "text": " the potential value of different actions, and this way, drastic parameter adjustments"}, {"start": 263.88, "end": 265.88, "text": " become a possibility."}, {"start": 265.88, "end": 270.4, "text": " 
And it not only improves the efficiency, but there is more to it because there are recent"}, {"start": 270.4, "end": 276.03999999999996, "text": " studies that show that using episodic memory indeed contributes to the learning of real"}, {"start": 276.03999999999996, "end": 278.67999999999995, "text": " humans and animals alike."}, {"start": 278.67999999999995, "end": 284.59999999999997, "text": " And two, it is beneficial to let the AI implement its own reinforcement learning algorithm, a"}, {"start": 284.59999999999997, "end": 290.08, "text": " concept often referred to as learning to learn or met our reinforcement learning."}, {"start": 290.08, "end": 295.76, "text": " This also helps obtaining more general knowledge that can be reused across tasks further improving"}, {"start": 295.76, "end": 298.03999999999996, "text": " the efficiency of the agent."}, {"start": 298.04, "end": 303.28000000000003, "text": " Here you see a picture of an FMRI and some regions are marked with yellow and orange"}, {"start": 303.28000000000003, "end": 304.48, "text": " here."}, {"start": 304.48, "end": 306.40000000000003, "text": " What could this possibly mean?"}, {"start": 306.40000000000003, "end": 311.88, "text": " Well, hold on to your papers because these highlight neural structures that implement"}, {"start": 311.88, "end": 316.56, "text": " a very similar metary reinforcement learning scheme within the human brain."}, {"start": 316.56, "end": 321.08000000000004, "text": " It turns out that metary enforcement learning or this learning to learn scheme may not"}, {"start": 321.08000000000004, "end": 326.72, "text": " just be something that speeds up our AI algorithms, but maybe a fundamental principle of the"}, {"start": 326.72, "end": 328.44000000000005, "text": " human brain as well."}, {"start": 328.44000000000005, "end": 332.76000000000005, "text": " So these two changes to the pre-enforcement learning not only drastically improve its"}, {"start": 332.76000000000005, "end": 338.04, "text": " efficiency, but it also suddenly maps quite a bit better to our brain."}, {"start": 338.04, "end": 340.04, "text": " How cool is that?"}, {"start": 340.04, "end": 343.20000000000005, "text": " So which school of thought are you most fond of?"}, {"start": 343.20000000000005, "end": 348.12, "text": " Should we model the brain or should we listen to Richard Sutton's bitter lesson?"}, {"start": 348.12, "end": 349.72, "text": " Let me know in the comments."}, {"start": 349.72, "end": 351.88000000000005, "text": " Also make sure to have a look at the paper."}, {"start": 351.88000000000005, "end": 356.6, "text": " I found it to be quite readable and you really don't need to be a neuroscientist to read"}, {"start": 356.6, "end": 359.20000000000005, "text": " it and learn quite a few new things."}, {"start": 359.20000000000005, "end": 361.76000000000005, "text": " Make sure to have a look at it in the video description."}, {"start": 361.76000000000005, "end": 367.20000000000005, "text": " Now, I think you noticed that this paper doesn't contain the usual visual fireworks and"}, {"start": 367.20000000000005, "end": 373.16, "text": " is more complex than your average two-minute papers video and hence I expected to get significantly"}, {"start": 373.16, "end": 374.16, "text": " less views."}, {"start": 374.16, "end": 377.0, "text": " That's not a great business model, but you know what?"}, {"start": 377.0, "end": 381.56, "text": " I made this channel so I can share with you all these important lessons that I learned"}, 
{"start": 381.56, "end": 382.84000000000003, "text": " during my journey."}, {"start": 382.84, "end": 387.71999999999997, "text": " This has been a true privilege and I am thrilled that I am still able to talk about all these"}, {"start": 387.71999999999997, "end": 393.91999999999996, "text": " amazing papers without worrying too much whether any of these videos will go viral or not."}, {"start": 393.91999999999996, "end": 400.03999999999996, "text": " This has only been possible because of your unwavering support on patreon.com slash two-minute"}, {"start": 400.03999999999996, "end": 401.03999999999996, "text": " papers."}, {"start": 401.03999999999996, "end": 405.79999999999995, "text": " If you feel like chipping in, please click the Patreon link in the video description."}, {"start": 405.79999999999995, "end": 410.59999999999997, "text": " And if you are more like a crypto person, we also support cryptocurrencies like Bitcoin,"}, {"start": 410.6, "end": 415.48, "text": " Ethereum and Litecoin, the addresses are also available in the video description."}, {"start": 415.48, "end": 444.0, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=C6nonNRoF7g
This is How Google’s Phone Enhances Your Photos
📝 The paper "Handheld Multi-frame Super-resolution" is available here: https://sites.google.com/view/handheld-super-res/ ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Ivelin Ivanov, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #NightSight #GooglePixel
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Super resolution is a research field with a ton of published papers every year, where the simplest problem formulation is that we have a low-resolution, coarse image as an input and we wish to enhance it to get a crisper, higher resolution image. You know, the thing that can always be done immediately and perfectly in many of these detective TV series. And yes, sure, the whole idea of super resolution sounds a little like science fiction. How could I possibly get more content onto an image that's not already there? How would an algorithm know what a blurry text means if it's unreadable? It can't just guess what somebody wrote there, can it? Well, let's see. This paper provides an interesting take on this topic because it rejects the idea of having just one image as an input. You see, in this day and age, we have powerful mobile processors in our phones, and when we point our phone camera and take an image, it doesn't just take one, but a series of images. Most people don't know that some of these images are even taken as soon as we open our camera app, without even pushing the shoot button. Working with a batch of images is also the basis of the iPhone's beloved live photo feature. So, as a result, this method builds on this raw burst input with multiple images and doesn't need idealized conditions to work properly, which means that it can process footage that we shoot with our shaky hands. In fact, it forges an advantage out of this imperfection, because it can first align these photos, and then we have not one image, but a bunch of images with slight changes in viewpoint. This means that we have more information that we can extract from these several images, which can be stitched together into one higher quality output image. Now that's an amazing idea if I've ever seen one. It not only acknowledges the limitations of real world usage, but even takes advantage of them. Brilliant. You see throughout this video that the results look heavenly. However, not every kind of motion is desirable. If we have a more complex motion, such as the one you see here as we move away from the scene, this can lead to unwanted artifacts in the reconstruction. Luckily, the method is able to detect these cases by building a robustness mask that highlights which regions will likely lead to these unwanted artifacts. Whatever is deemed to be low quality information in this mask is ultimately rejected, leading to high quality outputs even in the presence of weird motions. And now hold on to your papers, because this method does not use neural networks or any learning techniques, and is orders of magnitude faster than those while providing higher quality images. As a result, the entirety of the process takes only 100 milliseconds to process a really detailed 12 megapixel image, which means that it can do it 10 times every second. These are interactive frame rates, and it seems that doing this in real time is going to be possible within the near future. Huge congratulations to Bart and his team at Google for out-muscling the neural networks. Luckily, higher quality ground truth data can also be easily produced for this project, creating a nice baseline to compare the results to. Here you see that this new method is much closer to this ground truth than previous techniques. 
As an additional corollary of this solution, the more of these jerky frames we can collect, the better it can reconstruct images in poor lighting conditions, which is typically one of the more desirable features in today's smartphones. In fact, get this: this is the method behind Google's magical Night Sight and Super Res Zoom features that you can access by using their Pixel 3 flagship phones. When this feature came out, I remember that phone reviewers and everyone unaware of the rate of progress in computer graphics research were absolutely floored by the results and could hardly believe their eyes when they first tried it. And I don't blame them. This is a truly incredible piece of work. Make sure to have a look at the paper, which contains a ton of comparisons against other methods, and also shows the relation between the number of collected burst frames and the output quality we can expect as a result, and more. Thanks for watching and for your generous support, and I'll see you next time.
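To make the align-and-merge idea above a bit more tangible, here is a heavily simplified sketch: align each burst frame to a reference with a global integer shift, build a crude robustness mask from per-pixel disagreement, and take a weighted average. The actual method operates on raw sensor frames with local tile alignment and a far more careful kernel-based merge, so treat this strictly as an illustration; all function names and thresholds are assumptions.

```python
import numpy as np

def align_to_reference(reference, frame, max_shift=4):
    """Find the integer (dy, dx) shift that best aligns `frame` to `reference`."""
    best_shift, best_error = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(frame, (dy, dx), axis=(0, 1))
            error = np.mean((shifted - reference) ** 2)
            if error < best_error:
                best_error, best_shift = error, (dy, dx)
    return np.roll(frame, best_shift, axis=(0, 1))

def merge_burst(frames, rejection_threshold=0.1):
    """Average aligned frames, down-weighting pixels that disagree with the reference."""
    reference = frames[0]
    accumulated = reference.copy()
    total_weight = np.ones_like(reference)
    for frame in frames[1:]:
        aligned = align_to_reference(reference, frame)
        # Crude robustness mask: pixels that still disagree after alignment
        # (e.g. due to scene motion) are rejected instead of being merged in,
        # which avoids ghosting artifacts in the output.
        mask = (np.abs(aligned - reference) < rejection_threshold).astype(float)
        accumulated += mask * aligned
        total_weight += mask
    return accumulated / total_weight

# Synthetic demo: a noisy, slightly shifted burst of the same scene,
# standing in for the small hand-shake between consecutive photos.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
burst = [np.roll(scene, (s, -s), axis=(0, 1)) + 0.05 * rng.standard_normal((64, 64))
         for s in range(4)]
merged = merge_burst(burst)
print("single-frame noise:", round(float(np.std(burst[0] - scene)), 3))
print("merged noise:      ", round(float(np.std(merged - scene)), 3))
```

Merging several independently noisy, slightly shifted frames is also why the same machinery helps so much in low light: the noise averages out while the aligned detail reinforces itself.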
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.6000000000000005, "end": 9.38, "text": " Super Resolution is a research field with a ton of published papers every year where the"}, {"start": 9.38, "end": 15.0, "text": " simplest problem formulation is that we have a low resolution course image as an input"}, {"start": 15.0, "end": 19.52, "text": " and we wish to enhance it to get a crisper, higher resolution image."}, {"start": 19.52, "end": 24.22, "text": " You know the thing that can always be done immediately and perfectly in many of these"}, {"start": 24.22, "end": 26.240000000000002, "text": " detective TV series."}, {"start": 26.24, "end": 32.519999999999996, "text": " And yes, sure, the whole idea of super resolution sounds a little like science fiction."}, {"start": 32.519999999999996, "end": 37.76, "text": " How could I possibly get more content onto an image that's not already there?"}, {"start": 37.76, "end": 42.26, "text": " How would an algorithm know what a blurry text means if it's unreadable?"}, {"start": 42.26, "end": 45.28, "text": " It can't just guess what somebody wrote there, can it?"}, {"start": 45.28, "end": 47.14, "text": " Well, let's see."}, {"start": 47.14, "end": 51.92, "text": " This paper provides an interesting take on this topic because it rejects the idea of"}, {"start": 51.92, "end": 54.54, "text": " having just one image as an input."}, {"start": 54.54, "end": 59.42, "text": " You see, in this day and age, we have powerful mobile processors in our phones and when we"}, {"start": 59.42, "end": 64.94, "text": " point our phone camera and take an image, it doesn't just take one, but a series of"}, {"start": 64.94, "end": 66.34, "text": " images."}, {"start": 66.34, "end": 71.03999999999999, "text": " Most people don't know that some of these images are even taken as soon as we open our"}, {"start": 71.03999999999999, "end": 74.78, "text": " camera app without even pushing the shoot button."}, {"start": 74.78, "end": 80.3, "text": " Working with a batch of images is also the basis of the iPhone's beloved live photo feature."}, {"start": 80.3, "end": 86.06, "text": " So as a result, this method builds on this raw burst input with multiple images and doesn't"}, {"start": 86.06, "end": 91.1, "text": " need idealized conditions to work properly, which means that it can process footage that"}, {"start": 91.1, "end": 93.7, "text": " we shoot with our shaky hands."}, {"start": 93.7, "end": 99.53999999999999, "text": " In fact, it forges an advantage out of this imperfection because it can first align"}, {"start": 99.53999999999999, "end": 105.58, "text": " these photos and then we have not one image, but a bunch of images with slight changes"}, {"start": 105.58, "end": 107.1, "text": " in view point."}, {"start": 107.1, "end": 111.58, "text": " This means that we have more information that we can extract from these several images,"}, {"start": 111.58, "end": 116.53999999999999, "text": " which can be stitched together into one higher quality output image."}, {"start": 116.53999999999999, "end": 119.86, "text": " Now that's an amazing idea if I've ever seen one."}, {"start": 119.86, "end": 125.06, "text": " It not only acknowledges the limitations of real world usage, but even takes advantage"}, {"start": 125.06, "end": 126.06, "text": " of it."}, {"start": 126.06, "end": 127.06, "text": " Brilliant."}, {"start": 127.06, "end": 131.1, "text": " You see throughout this video 
that the results look heavenly."}, {"start": 131.1, "end": 135.9, "text": " However, not every kind of motion is desirable."}, {"start": 135.9, "end": 140.18, "text": " If we have a more complex motion, such as the one you see here as we move away from"}, {"start": 140.18, "end": 144.58, "text": " the scene, this can lead to unwanted artifacts in the reconstruction."}, {"start": 144.58, "end": 151.14000000000001, "text": " Luckily, the method is able to detect these cases by building a robustness mask that highlights"}, {"start": 151.14000000000001, "end": 155.78, "text": " which are the regions that will likely lead to these unwanted artifacts."}, {"start": 155.78, "end": 161.26, "text": " Whatever is deemed to be low quality information in this mask is ultimately rejected, leading"}, {"start": 161.26, "end": 166.1, "text": " to high quality outputs even in the presence of weird motions."}, {"start": 166.1, "end": 171.34, "text": " And now hold on to your papers because this method does not use neural networks or any"}, {"start": 171.34, "end": 177.22, "text": " learning techniques and these orders of magnitude faster than those while providing higher quality"}, {"start": 177.22, "end": 178.42, "text": " images."}, {"start": 178.42, "end": 184.1, "text": " As a result, the entirety of the process takes only 100 milliseconds to process a really"}, {"start": 184.1, "end": 190.42, "text": " detailed 12 megapixel image, which means that it can do it 10 times every second."}, {"start": 190.42, "end": 194.7, "text": " These are interactive frame rates and it seems that doing this in real time is going to"}, {"start": 194.7, "end": 197.22, "text": " be possible within the near future."}, {"start": 197.22, "end": 202.82, "text": " Huge congratulations to Bart and his team at Google for out muscling the neural networks."}, {"start": 202.82, "end": 208.38, "text": " Luckily, higher quality ground truth data can also be easily produced for this project,"}, {"start": 208.38, "end": 211.77999999999997, "text": " creating a nice baseline to compare the results to."}, {"start": 211.77999999999997, "end": 216.94, "text": " Here you see that this new method is much closer to this ground truth than previous techniques."}, {"start": 216.94, "end": 222.02, "text": " As an additional corollary of this solution, the more of these jerky frames we can collect,"}, {"start": 222.02, "end": 226.94, "text": " the better it can reconstruct images in poor lighting conditions, which is typically"}, {"start": 226.94, "end": 230.5, "text": " one of the more desirable features in today's smartphones."}, {"start": 230.5, "end": 232.5, "text": " In fact, get this."}, {"start": 232.5, "end": 237.86, "text": " This is the method behind Google's magical night sight and super rest zoom features that"}, {"start": 237.86, "end": 241.86, "text": " you can access by using their Pixel 3 flagship phones."}, {"start": 241.86, "end": 246.9, "text": " When this feature came out, I remember that phone reviewers and everyone unaware of the"}, {"start": 246.9, "end": 252.18, "text": " rate of progress in computer graphics research were absolutely floored by the results and"}, {"start": 252.18, "end": 255.42000000000002, "text": " could hardly believe their eyes when they first tried it."}, {"start": 255.42000000000002, "end": 256.54, "text": " And I don't blame them."}, {"start": 256.54, "end": 259.5, "text": " This is a truly incredible piece of work."}, {"start": 259.5, "end": 263.58000000000004, "text": " Make sure to have a look at the 
paper that contains a ton of comparisons against other"}, {"start": 263.58000000000004, "end": 269.3, "text": " methods and it also shows the relation between the number of collected burst frames and the"}, {"start": 269.3, "end": 273.42, "text": " output quality we can expect as a result and more."}, {"start": 273.42, "end": 302.06, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
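To make the align-and-merge idea described in the segments above more concrete, here is a minimal Python sketch of the three ingredients mentioned: align each burst frame to a reference, build a robustness mask that rejects pixels affected by complex motion, and merge the surviving samples into one higher-quality output. Everything here is a simplified assumption: grayscale NumPy frames, brute-force integer alignment, and a Gaussian mask, with hypothetical function names. The actual paper works on raw bursts with sub-pixel alignment and kernel regression, which this toy does not attempt.

```python
import numpy as np

def merge_burst(burst, upscale=2, sigma=10.0, search=2):
    """Toy align-mask-merge pass over a burst of grayscale frames.

    Hypothetical helper, not the paper's implementation: it aligns each frame
    to the first one with brute-force integer shifts, builds a per-pixel
    robustness mask that down-weights pixels that still disagree with the
    reference (e.g. due to complex motion), and averages the surviving samples
    on an upscaled grid."""
    ref = burst[0].astype(np.float64)
    up = lambda img: np.kron(img, np.ones((upscale, upscale)))
    acc = np.zeros_like(up(ref))
    weight = np.zeros_like(acc)
    for frame in burst:
        frame = frame.astype(np.float64)
        # 1) crude global alignment: pick the integer shift that best matches the reference
        best_shift, best_err = (0, 0), np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                err = np.mean((np.roll(frame, (dy, dx), axis=(0, 1)) - ref) ** 2)
                if err < best_err:
                    best_shift, best_err = (dy, dx), err
        aligned = np.roll(frame, best_shift, axis=(0, 1))
        # 2) robustness mask: pixels that still differ a lot are treated as low-quality information
        mask = np.exp(-((aligned - ref) ** 2) / (2.0 * sigma ** 2))
        # 3) merge: weighted accumulation on the finer grid
        acc += up(aligned) * up(mask)
        weight += up(mask)
    return acc / np.maximum(weight, 1e-8)

# Example: a noisy, slightly shifted synthetic burst, the kind a shaky hand would produce
rng = np.random.default_rng(0)
scene = np.zeros((32, 32)); scene[8:24, 8:24] = 1.0
burst = [np.roll(scene, (rng.integers(-1, 2), rng.integers(-1, 2)), axis=(0, 1))
         + rng.normal(0, 0.1, scene.shape) for _ in range(8)]
result = merge_burst(burst)
```

The averaging over many aligned, masked frames is also why collecting more jerky frames helps in poor lighting: each extra frame contributes more accepted samples per output pixel.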
Two Minute Papers
https://www.youtube.com/watch?v=JJlSgm9OByM
This Robot Throws Objects with Amazing Precision
📝 The paper "TossingBot: Learning to Throw Arbitrary Objects with Residual Physics" is available here: https://tossingbot.cs.princeton.edu/ ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #TossingBot
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this footage, we have a variety of objects that differ in geometry, and the goal is to place them into this box using an AI. Sounds simple, right? This has been solved long, long ago. However, there is a catch here, which is that this box is outside of the range of the robot arm, therefore it has to throw the object in there with just the right amount of force for it to end up in this box. It can perform 500 of these tosses per hour. Before anyone misunderstands what is going on in the footage here, it almost seems like the robot on the left is helping by moving to where the object would fall after the robot on the right throws it. This is not the case. Here you see a small part of my discussion with Andy Zeng, the lead author of the paper, where he addresses this. The results look amazing, and note that this problem is much harder than most people would think at first. In order to perform this, the AI has to understand how to grasp an object with a given geometry. In fact, we may grab the same object on a different side, throw it the same way, and there would be a great deal of difference in the trajectory of this object. Have a look at this example with a screwdriver. It also has to take into consideration the air resistance of a given object as well. Man, this problem is hard. As you see here, initially, it cannot even practice throwing because its reliability in grasping is quite poor. However, after 14 hours of training, it achieves a remarkable accuracy, and to be able to train for so long, this training table is designed in a way that when running out of objects, it can restart itself without human help. Nice. To achieve this, we need a lot of training objects, but not any kind of training objects. These objects have to be diversified. As you see here, during training, the box position enjoys a great variety and the object geometry is also well diversified. Normally, in these experiments, we are looking to obtain some kind of intelligence. In this case, that would mean that the AI truly learned the underlying dynamics of object throwing and not just found some good solutions via trial and error. A good way to test this would be to give it an object it has never seen before and see how its knowledge generalizes to that. Same with locations. On the left, you see these boxes marked with orange. This was the training set, but later it was asked to throw into the blue boxes, which is something it has never tried before. And look, this is excellent generalization. Bravo. You can also see the success probabilities for grasping and throwing here. A key idea in this work is that this system is endowed with a physics-based controller which contains the standard equations of linear projectile motion. This is simple knowledge from high school physics that ignores several key real-life factors, such as the effect of aerodynamic drag. This way, the AI does not have to learn from scratch and can use these calculations as an initial guess, and it is tasked with learning to account for the difference between this basic equation and real-life trajectories. In other words, it is given basic physics and is asked to learn advanced physics by building on that. Loving this idea. A simulation environment was also developed for this project, where one can test the effect of, for instance, changing the gripper width, which would be costly and labor-intensive in the real world. Of course, these are all free in a software simulation.
What a time to be alive. Thanks for watching and for your generous support and I'll see you next time.
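The "physics first, learned residual on top" structure mentioned in the transcript can be written down in a few lines. Below is a minimal Python sketch under stated assumptions: the closed-form speed comes from drag-free projectile motion, and `residual_fn` is a hypothetical placeholder for the trained correction (the real TossingBot predicts this residual with a deep network from visual input, which is not reproduced here).

```python
import numpy as np

def ballistic_release_speed(distance, drop, angle_rad, g=9.81):
    """Drag-free projectile motion: the speed needed to travel `distance`
    horizontally while ending up `drop` meters below the release point,
    when thrown at `angle_rad`. This is the 'high school physics' initial guess.
    Assumes the target is reachable, i.e. the denominator below is positive."""
    cos_a = np.cos(angle_rad)
    denom = 2.0 * cos_a ** 2 * (distance * np.tan(angle_rad) + drop)
    return distance * np.sqrt(g / denom)

def throw_speed(distance, drop, angle_rad, residual_fn=None, features=None):
    """Physics-based guess plus a learned residual correction.
    `residual_fn` stands in for the trained model that accounts for
    aerodynamic drag, grasp pose and other unmodelled effects."""
    v = ballistic_release_speed(distance, drop, angle_rad)
    if residual_fn is not None:
        v = v + residual_fn(features)  # learned delta on top of the physics estimate
    return v

# Example: target 1.5 m away and 0.3 m below the gripper, released at 45 degrees,
# with a hypothetical residual model that nudges the speed up slightly.
v = throw_speed(1.5, 0.3, np.deg2rad(45.0), residual_fn=lambda f: 0.12, features=None)
print("release speed (m/s):", round(float(v), 2))
```

The design choice is the point of the paper: the physics term carries most of the answer, so the learned part only has to model the (much smaller) gap between idealized and real trajectories.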
[{"start": 0.0, "end": 4.5200000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zonai Fahir."}, {"start": 4.5200000000000005, "end": 9.84, "text": " In this footage, we have a variety of objects that differ in geometry and the goal is to"}, {"start": 9.84, "end": 13.280000000000001, "text": " place them into this box using an AI."}, {"start": 13.280000000000001, "end": 14.92, "text": " Sounds simple, right?"}, {"start": 14.92, "end": 17.28, "text": " This has been solved long, long ago."}, {"start": 17.28, "end": 22.400000000000002, "text": " However, there is a catch here, which is that this box is outside of the range of the"}, {"start": 22.400000000000002, "end": 27.52, "text": " robot arm, therefore it has to throw it in there with just the right amount of force"}, {"start": 27.52, "end": 30.12, "text": " for it to end up in this box."}, {"start": 30.12, "end": 33.84, "text": " It can perform 500 of these tosses per hour."}, {"start": 33.84, "end": 38.44, "text": " Before anyone misunderstands what is going on in the footage here, it almost seems like"}, {"start": 38.44, "end": 44.0, "text": " the robot on the left is helping by moving to where the object would fall after the robot"}, {"start": 44.0, "end": 46.0, "text": " on the right throws it."}, {"start": 46.0, "end": 47.760000000000005, "text": " This is not the case."}, {"start": 47.760000000000005, "end": 52.56, "text": " Here you see a small part of my discussion with Andy Zang, the lead author of the paper"}, {"start": 52.56, "end": 54.28, "text": " where he addresses this."}, {"start": 54.28, "end": 59.64, "text": " The results look amazing and note that this problem is much harder than most people would"}, {"start": 59.64, "end": 61.160000000000004, "text": " think at first."}, {"start": 61.160000000000004, "end": 67.88, "text": " In order to perform this, the AI has to understand how to grasp an object with a given geometry."}, {"start": 67.88, "end": 73.64, "text": " In fact, we may grab the same object at a different side, throw it the same way, and there"}, {"start": 73.64, "end": 77.68, "text": " would be a great deal of difference in the trajectory of this object."}, {"start": 77.68, "end": 80.28, "text": " Have a look at this example with a screwdriver."}, {"start": 80.28, "end": 85.28, "text": " It also has to take into consideration the air resistance of a given object as well."}, {"start": 85.28, "end": 88.44, "text": " Man, this problem is hard."}, {"start": 88.44, "end": 94.52, "text": " As you see here, initially, it cannot even practice throwing because its reliability in grasping"}, {"start": 94.52, "end": 95.6, "text": " is quite poor."}, {"start": 95.6, "end": 102.0, "text": " However, after 14 hours of training, it achieves a remarkable accuracy and to be able to train"}, {"start": 102.0, "end": 107.72, "text": " for so long, this training table is designed in a way that when running out of objects,"}, {"start": 107.72, "end": 111.03999999999999, "text": " it can restart itself without human help."}, {"start": 111.03999999999999, "end": 113.12, "text": " Nice."}, {"start": 113.12, "end": 118.72, "text": " To achieve this, we need a lot of training objects but not any kind of training objects."}, {"start": 118.72, "end": 121.56, "text": " These objects have to be diversified."}, {"start": 121.56, "end": 127.16, "text": " As you see here, during training, the box position enjoys a great variety and the object"}, {"start": 127.16, "end": 130.16, "text": " geometry is also well 
diversified."}, {"start": 130.16, "end": 136.36, "text": " Normally, in these experiments, we are looking to obtain some kind of intelligence."}, {"start": 136.36, "end": 142.56, "text": " In this case, would mean that the AI truly learned the underlying dynamics of object throwing"}, {"start": 142.56, "end": 146.0, "text": " and not just found some good solutions via trial and error."}, {"start": 146.0, "end": 151.32000000000002, "text": " A good way to test this would be to give it an object it has never seen before and see"}, {"start": 151.32000000000002, "end": 154.60000000000002, "text": " how its knowledge generalizes to that."}, {"start": 154.60000000000002, "end": 156.12, "text": " Same with locations."}, {"start": 156.12, "end": 159.56, "text": " On the left, you see these boxes marked with orange."}, {"start": 159.56, "end": 165.16000000000003, "text": " This was the training set, but later it was asked to throw it into the blue boxes, which"}, {"start": 165.16, "end": 168.28, "text": " is something it has never tried before."}, {"start": 168.28, "end": 172.44, "text": " And look, this is excellent generalization."}, {"start": 172.44, "end": 173.51999999999998, "text": " Bravo."}, {"start": 173.51999999999998, "end": 177.68, "text": " You can also see the success probabilities for grasping and throwing here."}, {"start": 177.68, "end": 183.0, "text": " A key idea in this work is that this system is endowed with a physics-based controller"}, {"start": 183.0, "end": 187.28, "text": " which contains the standard equations of linear projectile motion."}, {"start": 187.28, "end": 191.51999999999998, "text": " This is simple knowledge from high school physics that ignores several key real life"}, {"start": 191.52, "end": 195.60000000000002, "text": " factors such as the effect of aerodynamic drag."}, {"start": 195.60000000000002, "end": 200.44, "text": " This way, the AI does not have to learn from scratch and can use these calculations as"}, {"start": 200.44, "end": 205.08, "text": " an initial guess and it is tasked with learning to account for the difference between this"}, {"start": 205.08, "end": 208.32000000000002, "text": " basic equation and real life trajectories."}, {"start": 208.32000000000002, "end": 213.76000000000002, "text": " In other words, it is given basic physics and is asked to learn advanced physics by building"}, {"start": 213.76000000000002, "end": 214.76000000000002, "text": " on that."}, {"start": 214.76000000000002, "end": 216.4, "text": " Loving this idea."}, {"start": 216.4, "end": 221.36, "text": " A simulation environment was also developed for this project where one can test the effect"}, {"start": 221.36, "end": 226.8, "text": " of, for instance, changing the gripper width which would be costly and labor intensive"}, {"start": 226.8, "end": 228.20000000000002, "text": " in the real world."}, {"start": 228.20000000000002, "end": 231.88, "text": " Of course, these are all free in a software simulation."}, {"start": 231.88, "end": 233.36, "text": " What a time to be alive."}, {"start": 233.36, "end": 263.32, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=tfb6aEUMC04
OpenAI Five Beats World Champion DOTA2 Team 2-0! 🤖
Check out Lambda Labs here: https://lambdalabs.com/papers OpenAI's blog post: https://openai.com/blog/openai-five-finals/ Reddit AMA: https://old.reddit.com/r/DotA2/comments/bf49yk/hello_were_the_dev_team_behind_openai_five_we/ Reddit discussion on buybacks: https://old.reddit.com/r/DotA2/comments/bcx8cf/i_think_the_openai_games_revealed_an_invisible/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #OpenAIFive, #DOTA2
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This episode has been sponsored by Lambda Labs. Not so long ago, we talked about DeepMind's AlphaStar, an AI that was able to defeat top-tier human players in Starcraft 2, a complex real-time strategy game. Of course, I love talking about AIs that are developed to challenge pro gamers at a variety of difficult games, so this time around we'll have a look at another major milestone, OpenAI Five, which is an AI that plays Dota 2, a multiplayer online battle arena game with a huge cult following. As this game requires long-term strategic planning, it is a classic nightmare scenario for any AI. But OpenAI is no stranger to Dota 2: in 2017 they showed us an initial version of their AI that was able to play one-versus-one games with only one hero and was able to reliably beat Dendi, a world champion player. That was quite an achievement, however, of course this was meant to be a stepping stone towards playing the real Dota 2. Then in 2018 they unveiled OpenAI Five, an improved version of this AI that played 5 vs 5 games with a limited hero pool. This team was able to defeat competent players but was still not quite at the level of a world champion human team. In a one-hour interview, the OpenAI research team mentioned that due to the deadline of The International event, they had to make quite a few concessions. And this time several things have changed. First, they didn't just challenge some local team of formidable players, no no, they flat out challenged OG, the reigning world champion team, an ambitious move that exudes confidence from their side. Second, this time around, there was no tight deadline, as the date of the challenge was chosen by OpenAI. Let's quickly talk about the rules of the competition and then see if OpenAI's confident move was justified. These learning agents don't look at the pixels of the game, and as a result they see the world as a big bunch of numbers. And this time around, the AI was able to play a pool of 17 heroes and trained against itself for millions and millions of games. And now, let's have a look at what happened in this best-of-3 series. In match 1, right after picking the roster of heroes, the AI estimated its win probability to be 67%, so it was quite a surprise that early on, it looked like OpenAI's bots were running around aimlessly. Over time, we found out that it was not at all the case. It plays unusually aggressively from the get-go and uses buybacks quite liberally at times where human players don't really consider it to be a good choice. These buybacks resurrect a perished hero quickly but in return cost money. Later, it became clearer that these bots are no joke. They know exactly when to engage and when to back out from an engagement with the smallest sliver of health left. I will show quite a few examples of those to you during this video, so stay tuned. A little less than 20 minutes in, we had a very even game 1; if anything, OpenAI seemed a tiny bit behind, and someone noted that we should perhaps ask the bots what they think about their chances. And then the AI said, yeah, no worries, we have a higher than 95% chance to win the game. This was such a pivotal moment that was very surprising for everyone. Of course, if you call out a win with confidence, you better go all the way and indeed win the game. Right? Right. And sure enough, they wiped out almost the entire world champion team of the human players immediately after. N0tail's pushed all the way back.
Officials will come out to hold them down, but OpenAI, they've got two more kills. And then it noted, you know what? Remember what we just said? Forget that. We estimate our chances to win to be above 99% now. And shortly after, they won match number one. Can you believe this? This is absolutely amazing. Interestingly, one of the developers said that the AI is great at assessing whether a fight is worth it. As an interesting corollary, if you engage with it and it fights you, it probably means that you are going to lose. That must be quite confusing for the players. Some mind games for you. Love it. At the event, it was also such a joy to see such a receptive audience that understood and appreciated high-level plays. Onwards to match number two. Right after the draft, which is the process of choosing the heroes for each team, the AI predicted a win percentage that was much closer this time around, around 60%. In this game, the AI turned up the heat real fast and said, just five minutes into the game, which is nothing, that it has an over 80% chance to win the game. And now, watch this. In the game, you can see a great example of where the AI just gets away with a sliver of health. Look at this guy. They will find the follow-up wrap-around kill on towards him, at least they are trying, but with that walk right out, he is able to run away. Another Fissure comes out. Surely this kill is going to be there, but no, a stun from the span holds back the Shaker and he TPs out on 30 HP. OpenAI gets out of there with the span, they cannot get that kill, OG. Look at that. This is either an accident or some unreal level of foresight from the side of this agent. I'd love to hear your opinion on which one you think it is. By the 9.5-minute mark, which is still really early, OpenAI Five said, yes, we got this one too. Over 95%. Here you see an interesting scenario where the AI loses one hero, but it almost immediately kills two of the human heroes and comes out favorably, at which point we wonder whether this was a deliberate bait it pulled on the humans. They do it the disabled. Hex comes out from the hotel, they'll kill off the Crystal Maiden, but OpenAI's Viper, diving past the tower onto us, tops this span, coming back in on minimal HP to throw that stun and secure the kill, as OpenAI again gets the favourable trade, another tower taken; they are playing at a ferocious speed here in the second game. By the 15-minute mark, the human players lost a barracks and were heavily underfunded and outplayed, with seemingly no way to come back. And sure enough, by the 21-minute mark, the game was over. There is no other way to say it, this second game was a one-sided beatdown. Game one was a strategic back and forth where OpenAI Five waited for the right moment to win the game in a big team fight, whereas here they pressured the human team from the get-go and never let them reach the end game, where they might have an advantage with their picks. Also have a look at this. Unreal. The final result is 2-0 for OpenAI. In the post-match interview, N0tail, one of the human players, noted that he is confident that from 5 games they would take at least 1, and after 15 games they would start winning reliably. Very reminiscent of what we heard from players playing against DeepMind's AI in Starcraft 2, and I hope this will be tested. However, in the end he agreed that it is inevitable that this AI will become unbeatable at some point.
It was also noted that in 5 vs 5 fights, they seem better at planning than any human team is, and there is quite a lot to learn from the AI for us humans. They were also trying to guess the reasoning for all of these early buybacks. According to the players, initially they flat out seemed like misplays. Perhaps the reason for these instant and not really great buybacks might have been that the AI knows that if the game goes on for much longer, statistically their chances with their given composition to win the game dwindle, so it needs to immediately go and win right now, whatever the cost. And again, an important lesson is that in this project, OpenAI is not spending so much money and resources to just play video games. Dota 2 is a wonderful testbed to see how their AI compares to humans at complex tasks that involve strategy and teamwork. However, the ultimate goal is to reuse parts of this system for other complex problems outside of video games. For instance, the algorithm that you've seen here today can also do this. But wait, there's more. Players after these show matches always tend to get these messages from others on Twitter telling them what they did wrong and what they should have done instead. Well, luckily these people were able to show their prowess, as OpenAI gave the chance for anyone in the world to challenge OpenAI Five competitively and play against them online. This way not only team OG, but everyone can get crushed by the AI. How cool is that? This arena event has concluded with over 15,000 games played, where OpenAI Five had a 99.4% win rate. There are still ways to beat it, but given the rate of progress of this project, likely not for long. Insanity. As always, if you're interested in more details, I put a link to a Reddit AMA in the video description, and I also can't wait to pick the algorithm apart for you, but for now we have to wait for the full paper to appear. And note that what happened here is not to be underestimated. Huge respect to the OpenAI team, to OG for the amazing games, and congratulations to the humans who were able to beat these beastly bots online. So there you go. Another long video that's not two minutes and it's not about the paper. Yet. Welcome to Two Minute Papers. If you're doing deep learning, make sure to look into Lambda GPU systems. Lambda offers workstations, servers, laptops, and the GPU cloud for deep learning. You can save up to 90% over AWS, GCP, and Azure GPU instances. Every Lambda GPU system is pre-installed with TensorFlow, PyTorch, and Keras. Just plug it in and start training. Lambda customers include Apple, Microsoft, and Stanford. Go to lambdalabs.com/papers or click the link in the description to learn more. Big thanks to Lambda for supporting Two Minute Papers and helping us make better videos. Thanks for watching and for your generous support, and I'll see you next time.
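The transcript mentions that the agents see the game as a big vector of numbers, train against themselves for millions of games, and repeatedly announce calibrated win probabilities. The sketch below illustrates only that last ingredient: fitting a logistic "value head" to self-play outcomes so that a mid-game state maps to a win probability. Everything here is a hypothetical toy (the stand-in game, the three features, the learning rate); OpenAI Five's actual system is a large recurrent policy trained with large-scale reinforcement learning, which this does not attempt to reproduce.

```python
import numpy as np

rng = np.random.default_rng(1)

def play_selfplay_game():
    """Hypothetical stand-in for one self-play game: produce a mid-game feature
    vector (think gold lead, kill lead, tower lead) and the eventual winner.
    In the real system the features come from the game state the agents observe
    and the outcome comes from actually finishing the game."""
    features = rng.normal(0, 1, size=3)        # toy mid-game state
    true_w = np.array([1.2, 0.8, 0.5])         # hidden 'ground truth' influence of each feature
    p_win = 1.0 / (1.0 + np.exp(-features @ true_w))
    return features, float(rng.random() < p_win)

def train_win_probability(n_games=5000, lr=0.05):
    """Fit a logistic value head so mid-game states map to calibrated win
    probabilities -- the kind of number behind a '95% chance to win' callout."""
    w = np.zeros(3)
    for _ in range(n_games):
        x, won = play_selfplay_game()
        p = 1.0 / (1.0 + np.exp(-x @ w))
        w += lr * (won - p) * x                # stochastic gradient step on the log-loss
    return w

w = train_win_probability()
state = np.array([1.0, 0.5, 0.0])              # e.g. a modest gold and kill lead
print("estimated win probability:", 1.0 / (1.0 + np.exp(-state @ w)))
```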
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karojona Ifehir."}, {"start": 4.64, "end": 7.640000000000001, "text": " This episode has been sponsored by Lambda Labs."}, {"start": 7.640000000000001, "end": 13.34, "text": " Not so long ago, we talked about DeepMind's Alpha Star, an AI that was able to defeat top"}, {"start": 13.34, "end": 18.72, "text": " tier human players in Starcraft 2, a complex real-time strategy game."}, {"start": 18.72, "end": 24.48, "text": " Of course, I love talking about AI's that are developed to challenge pro gamers at a variety"}, {"start": 24.48, "end": 30.54, "text": " of difficult games, so this time around we'll have a look at another major milestone, Open"}, {"start": 30.54, "end": 37.2, "text": " AI 5, which is an AI that plays Dota 2, a multiplayer online battle arena game with a"}, {"start": 37.2, "end": 38.96, "text": " huge cult following."}, {"start": 38.96, "end": 44.44, "text": " As this game requires long-term strategic planning, it is a classic nightmare scenario for any"}, {"start": 44.44, "end": 45.44, "text": " AI."}, {"start": 45.44, "end": 52.72, "text": " But, Open AI is no stranger to Dota 2, in 2017 they showed us an initial version of their"}, {"start": 52.72, "end": 59.879999999999995, "text": " AI that was able to play one versus one games with only one hero and was able to reliably"}, {"start": 59.879999999999995, "end": 63.0, "text": " be Dandy, a world champion player."}, {"start": 63.0, "end": 68.28, "text": " That was quite an achievement, however, of course this was meant to be a stepping stone"}, {"start": 68.28, "end": 71.32, "text": " towards playing the real Dota 2."}, {"start": 71.32, "end": 79.52, "text": " Then in 2018 they unveiled Open AI 5, an improved version of this AI that played 5 vs 5 games"}, {"start": 79.52, "end": 81.4, "text": " with a limited hero pool."}, {"start": 81.4, "end": 86.60000000000001, "text": " This team was able to defeat competent players but was still not quite at the level of a world"}, {"start": 86.60000000000001, "end": 88.44000000000001, "text": " champion human team."}, {"start": 88.44000000000001, "end": 93.44000000000001, "text": " In a 1 hour interview, the Open AI research team mentioned that due to the deadline of"}, {"start": 93.44000000000001, "end": 97.88000000000001, "text": " the international event, they had to make quite a few concessions."}, {"start": 97.88000000000001, "end": 100.68, "text": " And this time several things have changed."}, {"start": 100.68, "end": 107.16000000000001, "text": " First, they didn't just challenge some local team of formidable players, no no, they flat"}, {"start": 107.16, "end": 113.8, "text": " out challenged OG, the reigning world champion team, an ambitious move that exudes confidence"}, {"start": 113.8, "end": 115.28, "text": " from their side."}, {"start": 115.28, "end": 119.92, "text": " Second, this time around, there was no tight deadline as the date of the challenge was"}, {"start": 119.92, "end": 121.84, "text": " chosen by Open AI."}, {"start": 121.84, "end": 128.07999999999998, "text": " Let's quickly talk about the rules of the competition and then see if Open AI's confident"}, {"start": 128.07999999999998, "end": 130.32, "text": " move was justified."}, {"start": 130.32, "end": 134.6, "text": " These learning agents don't look at the pixels of the game and as a result they see"}, {"start": 134.6, "end": 137.35999999999999, "text": " the world as a big bunch of numbers."}, {"start": 
137.35999999999999, "end": 143.56, "text": " And this time around, it was able to play a pool of 17 heroes and trained against itself"}, {"start": 143.56, "end": 146.28, "text": " for millions and millions of games."}, {"start": 146.28, "end": 150.95999999999998, "text": " And now, let's have a look at what happened in this best of 3 series."}, {"start": 150.95999999999998, "end": 156.6, "text": " In match 1, right after picking the roster of heroes, the AI estimated its win probability"}, {"start": 156.6, "end": 162.92, "text": " to be 67% so it was quite a surprise that early on, it looked like Open AI's bots were"}, {"start": 162.92, "end": 165.16, "text": " running around aimlessly."}, {"start": 165.16, "end": 168.64, "text": " Over time, we found out that it was not at all the case."}, {"start": 168.64, "end": 175.07999999999998, "text": " It plays unusually aggressively from the gap goal and uses buybacks quite liberally at times"}, {"start": 175.07999999999998, "end": 179.07999999999998, "text": " where human players don't really consider it to be a good choice."}, {"start": 179.07999999999998, "end": 184.16, "text": " These buybacks resurrect a perished hero quickly but in return cost money."}, {"start": 184.16, "end": 187.76, "text": " Later, it became clearer that these bots are no joke."}, {"start": 187.76, "end": 192.72, "text": " They know exactly when to engage and when to back out from an engagement with the smallest"}, {"start": 192.72, "end": 194.4, "text": " sliver of health left."}, {"start": 194.4, "end": 199.2, "text": " I will show quite a few examples of those to you during this video, so stay tuned."}, {"start": 199.2, "end": 205.8, "text": " A little less than 20 minutes in, we had a very even game 1, if anything, Open AI seemed"}, {"start": 205.8, "end": 211.8, "text": " a tiny bit behind and someone noted that we should perhaps ask the bots what they think"}, {"start": 211.8, "end": 213.36, "text": " about their chances."}, {"start": 213.36, "end": 220.24, "text": " And then the AI said, yeah, no worries, we have a higher than 95% chance to win the game."}, {"start": 220.24, "end": 224.28, "text": " This was such a pivotal moment that was very surprising for everyone."}, {"start": 224.28, "end": 229.60000000000002, "text": " Of course, if you call out to win with confidence, you better go all the way and indeed win the"}, {"start": 229.60000000000002, "end": 230.60000000000002, "text": " game."}, {"start": 230.60000000000002, "end": 231.60000000000002, "text": " Right?"}, {"start": 231.60000000000002, "end": 232.60000000000002, "text": " Right."}, {"start": 232.60000000000002, "end": 238.52, "text": " And sure enough, they wiped out almost the entire world champion team of the human players"}, {"start": 238.52, "end": 239.84, "text": " immediately after."}, {"start": 239.84, "end": 241.08, "text": " No tell's pushed all the way back."}, {"start": 241.08, "end": 245.08, "text": " Officials will come out to hold them down but Open AI, they've got two more kills."}, {"start": 245.08, "end": 257.24, "text": " And then noted, you know what?"}, {"start": 257.24, "end": 258.72, "text": " Remember that we just said?"}, {"start": 258.72, "end": 259.72, "text": " Forget that."}, {"start": 259.72, "end": 264.12, "text": " We estimate our chances to win to be above 99% now."}, {"start": 264.12, "end": 267.52000000000004, "text": " And shortly after, they won match number one."}, {"start": 267.52000000000004, "end": 268.76, "text": " Can you believe this?"}, {"start": 268.76, 
"end": 271.28000000000003, "text": " This is absolutely amazing."}, {"start": 271.28, "end": 277.2, "text": " Interestingly, one of the developers said that the AI is great at assessing whether a fight"}, {"start": 277.2, "end": 278.44, "text": " is worth it."}, {"start": 278.44, "end": 283.44, "text": " As an interesting corollary, if you engage with it and it fights you, it probably means"}, {"start": 283.44, "end": 285.23999999999995, "text": " that you are going to lose."}, {"start": 285.23999999999995, "end": 287.79999999999995, "text": " That must be quite confusing for the players."}, {"start": 287.79999999999995, "end": 289.23999999999995, "text": " Some mind games for you."}, {"start": 289.23999999999995, "end": 290.32, "text": " Love it."}, {"start": 290.32, "end": 296.08, "text": " At the event, it was also such a joy to see such a receptive audience that understood and"}, {"start": 296.08, "end": 299.44, "text": " appreciated high level plays."}, {"start": 299.44, "end": 301.64, "text": " One words to match number two."}, {"start": 301.64, "end": 306.16, "text": " Right after the draft, which is the process of choosing the heroes for each team, the"}, {"start": 306.16, "end": 312.04, "text": " AI predicted a win percentage that was much closer this time around, around 60%."}, {"start": 312.04, "end": 318.24, "text": " In this game, the AI turned up the heat real fast and said just five minutes into the game,"}, {"start": 318.24, "end": 323.8, "text": " which is nothing that it has an over 80% chance to win the game."}, {"start": 323.8, "end": 325.92, "text": " And now, watch this."}, {"start": 325.92, "end": 330.76, "text": " In the game, you can see a great example of where the AI just gets away with a sliver"}, {"start": 330.76, "end": 331.76, "text": " of health."}, {"start": 331.76, "end": 332.76, "text": " Look at this guy."}, {"start": 332.76, "end": 340.76, "text": " They will find the follow-up wrap around Killon towards him, at least they are trying,"}, {"start": 340.76, "end": 343.72, "text": " but with that walk right out, he is able to run away."}, {"start": 343.72, "end": 345.08000000000004, "text": " Another fish comes out."}, {"start": 345.08000000000004, "end": 349.04, "text": " Surely this kill is going to be there, but no, a stun from the span holds back the"}, {"start": 349.04, "end": 351.6, "text": " shaker and he teepes out on 30 HP."}, {"start": 351.6, "end": 357.16, "text": " Open AI gets out of there with the span, they cannot get that kill OG."}, {"start": 357.16, "end": 358.52000000000004, "text": " Look at that."}, {"start": 358.52000000000004, "end": 363.76000000000005, "text": " This is either an accident or some unreal level foresight from the side of this agent."}, {"start": 363.76000000000005, "end": 367.24, "text": " I'd love to hear your opinion on which one you think it is."}, {"start": 367.24, "end": 374.08000000000004, "text": " By the 9.5 minute mark, which is still really early, Open AI 5 said, yes, we got this one"}, {"start": 374.08000000000004, "end": 375.08000000000004, "text": " too."}, {"start": 375.08000000000004, "end": 378.36, "text": " Over 95%."}, {"start": 378.36, "end": 383.72, "text": " Here you see an interesting scenario where the AI loses one hero, but it almost immediately"}, {"start": 383.72, "end": 390.0, "text": " kills two of the human heroes and comes out favorably, at which point we wonder whether this was"}, {"start": 390.0, "end": 392.48, "text": " a deliberate bait it pulled on the humans."}, {"start": 392.48, 
"end": 393.48, "text": " They do it the disabled."}, {"start": 393.48, "end": 398.28000000000003, "text": " Hex comes out from the hotel, they'll kill up the crystal maiden, but Open AI Viper, diving"}, {"start": 398.28000000000003, "end": 402.52000000000004, "text": " past the tower onto us, tops this span, coming back in on minimal HP to throw that stun"}, {"start": 402.52, "end": 408.76, "text": " and secure the kill, as Open AI again getting the favourable trade another tower taken, they"}, {"start": 408.76, "end": 412.0, "text": " are playing at a ferocious speed here in the second game."}, {"start": 412.0, "end": 416.96, "text": " By the 15 minute mark, the human players lost a barracks and were heavily underfunded"}, {"start": 416.96, "end": 420.68, "text": " and outplayed with seemingly no way to come back."}, {"start": 420.68, "end": 425.08, "text": " And sure enough, by the 21 minute mark, the game was over."}, {"start": 425.08, "end": 430.08, "text": " There is no other way to say it, this second game was a one sided beat down."}, {"start": 430.08, "end": 435.28, "text": " 1-1 was a strategic back and forth where Open AI 5 waited for the right moment to win the"}, {"start": 435.28, "end": 441.15999999999997, "text": " game in a big team fight, where here they pressure the human team from the get go and never"}, {"start": 441.15999999999997, "end": 445.4, "text": " let them reach the end game where they might have an advantage with their picks."}, {"start": 445.4, "end": 463.12, "text": " Also have a look at this."}, {"start": 463.12, "end": 467.96, "text": " Unreal The final result is 2-0 for Open AI."}, {"start": 467.96, "end": 473.2, "text": " In the post-match interview, No-Tale, one of the human players noted that he is confident"}, {"start": 473.2, "end": 479.68, "text": " that from 5 games they would take at least 1 and after 15 games they would start winning"}, {"start": 479.68, "end": 481.08, "text": " reliably."}, {"start": 481.08, "end": 486.0, "text": " Very reminiscent of what we heard from players playing against DeepMind's AI in Starcraft"}, {"start": 486.0, "end": 488.68, "text": " 2 and I hope this will be tested."}, {"start": 488.68, "end": 494.36, "text": " However, in the end he agreed that it is inevitable that this AI will become unbeatable"}, {"start": 494.36, "end": 495.44, "text": " at some point."}, {"start": 495.44, "end": 501.52, "text": " It was also noted that in 5 vs 5 fights, they seem better in planning than any human team"}, {"start": 501.52, "end": 506.08, "text": " is and there is quite a lot to learn from the AI for us humans."}, {"start": 506.08, "end": 510.44, "text": " They were also trying to guess the reasoning for all of these early buybacks."}, {"start": 510.44, "end": 514.92, "text": " According to the players, initially they flat out seemed like misplaced."}, {"start": 514.92, "end": 519.1999999999999, "text": " Perhaps the reason for this instant and not really great buybacks might have been that"}, {"start": 519.1999999999999, "end": 524.96, "text": " the AI knows that if the game goes on for much longer, statistically their chances with"}, {"start": 524.96, "end": 530.1999999999999, "text": " their given composition to win the game do windows, so it needs to immediately go and win"}, {"start": 530.2, "end": 532.9200000000001, "text": " right now whatever the cost."}, {"start": 532.9200000000001, "end": 538.12, "text": " And again, an important lesson is that in this project, open AI is not spending so much"}, {"start": 538.12, 
"end": 541.24, "text": " money and resources to just play video games."}, {"start": 541.24, "end": 547.96, "text": " Dota 2 is a wonderful test bed to see how their AI compares to humans at complex tasks"}, {"start": 547.96, "end": 550.48, "text": " that involve strategy and teamwork."}, {"start": 550.48, "end": 556.0400000000001, "text": " However, the ultimate goal is to reuse parts of this system for other complex problems"}, {"start": 556.0400000000001, "end": 558.4000000000001, "text": " outside of video games."}, {"start": 558.4, "end": 563.72, "text": " For instance, the algorithm that you've seen here today can also do this."}, {"start": 563.72, "end": 566.0, "text": " But wait, there's more."}, {"start": 566.0, "end": 570.76, "text": " Players after these show matches always tend to get these messages from others on Twitter"}, {"start": 570.76, "end": 574.48, "text": " telling them what they did wrong and what they should have done instead."}, {"start": 574.48, "end": 580.68, "text": " Well, luckily these people were able to show their prowess as open AI gave the chance for"}, {"start": 580.68, "end": 587.0, "text": " anyone in the world to challenge the open AI 5 competitively and play against them online."}, {"start": 587.0, "end": 592.04, "text": " This way not only team OG, but everyone can get crushed by the AI."}, {"start": 592.04, "end": 593.52, "text": " How cool is that?"}, {"start": 593.52, "end": 601.32, "text": " This arena event has concluded with over 15,000 games played where open AI 5 had a 99.4%"}, {"start": 601.32, "end": 602.32, "text": " win rate."}, {"start": 602.32, "end": 607.12, "text": " There are still ways to beat it, but given the rate of progress of this project, likely"}, {"start": 607.12, "end": 608.6, "text": " not for long."}, {"start": 608.6, "end": 609.92, "text": " Insanity."}, {"start": 609.92, "end": 614.8, "text": " As always, if you're interested in more details, I put a link to a Reddit AMA in the video"}, {"start": 614.8, "end": 620.3599999999999, "text": " description and I also can't wait to pick the algorithm apart for you, but for now we"}, {"start": 620.3599999999999, "end": 622.8399999999999, "text": " have to wait for the full paper to appear."}, {"start": 622.8399999999999, "end": 626.88, "text": " And note that what happened here is not to be underestimated."}, {"start": 626.88, "end": 632.3599999999999, "text": " Huge respect to the open AI team, to OG for the amazing games and congratulations to the"}, {"start": 632.3599999999999, "end": 636.4, "text": " humans who were able to beat these beastly bots online."}, {"start": 636.4, "end": 637.4, "text": " So there you go."}, {"start": 637.4, "end": 641.5999999999999, "text": " Another long video that's not two minutes and it's not about the paper."}, {"start": 641.5999999999999, "end": 642.5999999999999, "text": " Yet."}, {"start": 642.5999999999999, "end": 644.52, "text": " Welcome to two minute papers."}, {"start": 644.52, "end": 648.96, "text": " If you're doing deep learning, make sure to look into Lambda GPU systems."}, {"start": 648.96, "end": 655.24, "text": " Lambda offers workstations, servers, laptops, and the GPU cloud for deep learning."}, {"start": 655.24, "end": 662.36, "text": " You can save up to 90% over AWS, GCP, and Azure GPU instances."}, {"start": 662.36, "end": 668.24, "text": " Every Lambda GPU system is pre-installed with TensorFlow, PyTorch, and Keras."}, {"start": 668.24, "end": 670.72, "text": " Just plug it in and start training."}, {"start": 670.72, 
"end": 674.88, "text": " Lambda customers include Apple, Microsoft, and Stanford."}, {"start": 674.88, "end": 680.72, "text": " Go to LambdaLabs.com, slash, papers, or click the link in the description to learn more."}, {"start": 680.72, "end": 685.8000000000001, "text": " Big thanks to Lambda for supporting two minute papers and helping us make better videos."}, {"start": 685.8, "end": 715.3599999999999, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=e_9f5Z0sMYE
Simulating Grains of Sand, Now 6 Times Faster
📝 The paper "Hybrid Grains: Adaptive Coupling of Discrete and Continuum Simulations of Granular Media" is available here: http://www.cs.columbia.edu/~smith/hybrid_grains/ ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Good news, another fluid paper is coming up today, and this one is about simulating granular materials. Most techniques that can simulate these grains can be classified as either discrete or continuum methods. Discrete methods, as the name implies, simulate all of these particles one by one. As a result, the amount of detail we can get in our simulations is unmatched; however, we are probably immediately asking the question, doesn't simulating every single grain of sand take forever? Oh yes, yes it does. Indeed, the price to be paid for all this amazing detail comes in the form of a large computation time. To work around this limitation, continuum methods were invented, which do the exact opposite by simulating all of these particles as one block, where most of the individual particles within the block behave in a similar manner. This makes the computation times a lot friendlier; however, since we are not simulating these grains individually, we lose out on a lot of interesting effects such as clogging, bouncing and ballistic motions. So, in short, a discrete method gives us a proper simulation but takes forever, while the continuum methods are approximate in nature but execute quicker. And now, from this exposition, the question naturally arises: can we produce a hybrid method that fuses together the advantages of both of these methods? This amazing paper proposes a technique to perform that by subdividing the simulation domain into an inside regime, where the continuum methods work well, and an outside regime, where we need to simulate every grain of sand individually with a discrete method. That is not all, because the tricky part comes in the form of the reconciliation zone, where a partially discrete and partially continuum simulation has to take place. The way to properly simulate this transition zone between the two regimes takes quite a bit of research effort to get right, and just think about the fact that we have to track and change these domains over time because, of course, the inside and outside of a block of particles changes rapidly over time. Throughout the video, you will see the continuum zones denoted with red and the discrete zones with blue, which are typically on the outside regions. The ratio of these zones gives us an idea of how much speedup we could get compared to a purely discrete simulation. In most cases, it means that 88% fewer discrete particles need to be simulated, and this can lead to a total speedup of 6 to 7 times over that simulation. Basically, at least six all-nighter simulations now running in one night? I'm in. Sign me up. Also make sure to have a look at the paper, because the level of execution of this work is just something else. Check it out in the video description. Beautiful work. My goodness. Thanks for watching and for your generous support, and I'll see you next time.
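To make the zoning idea in the transcript more tangible, here is a minimal Python sketch of one possible classification step: densely surrounded interior grains are treated as continuum material, sparsely surrounded grains near the free surface stay discrete, and continuum grains touching a discrete grain form a reconciliation band. The neighbour-count criterion, thresholds and function name are hypothetical simplifications; the paper's actual criteria and coupling are considerably more careful and are re-evaluated every time step.

```python
import numpy as np

def classify_grains(positions, radius=1.0, interior_neighbors=8):
    """Toy zoning step for a hybrid grain solver.

    Grains with many neighbours within `radius` become 'continuum' (interior),
    the rest stay 'discrete' (surface/outside), and continuum grains adjacent to
    a discrete grain are flagged 'reconciliation', where both models overlap.
    Brute-force O(n^2) neighbour search, fine for a sketch."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    neighbors = (dist < radius).sum(axis=1) - 1                       # exclude self
    labels = np.where(neighbors >= interior_neighbors,
                      "continuum", "discrete").astype("<U14")
    # reconciliation band: continuum grains with at least one discrete neighbour
    near_discrete = ((dist < radius) & (labels == "discrete")[None, :]).any(axis=1)
    labels[(labels == "continuum") & near_discrete] = "reconciliation"
    return labels

# Example: a dense 2D block of grains plus a few stray grains flying off it
block = np.stack(np.meshgrid(np.arange(10.0), np.arange(10.0)), axis=-1).reshape(-1, 2)
strays = np.array([[12.0, 5.0], [13.5, 6.0], [11.0, 11.0]])
labels = classify_grains(np.vstack([block, strays]), radius=1.5)
print({z: int((labels == z).sum()) for z in ["continuum", "reconciliation", "discrete"]})
```

The speedup argument in the transcript follows directly from this partition: only the "discrete" and "reconciliation" grains need per-particle treatment, and in the paper's scenes that is a small fraction of the total.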
[{"start": 0.0, "end": 4.5600000000000005, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Karoi Jona Ifehir."}, {"start": 4.5600000000000005, "end": 10.38, "text": " Good news, another fluid paper is coming up today and this one is about simulating granular"}, {"start": 10.38, "end": 11.700000000000001, "text": " materials."}, {"start": 11.700000000000001, "end": 16.34, "text": " Most techniques that can simulate these grains can be classified as either discrete or"}, {"start": 16.34, "end": 17.78, "text": " continuum methods."}, {"start": 17.78, "end": 23.900000000000002, "text": " Discrete methods as the name implies simulate all of these particles one by one."}, {"start": 23.900000000000002, "end": 29.38, "text": " As a result, the amount of detail we can get in our simulations is unmatched, however,"}, {"start": 29.38, "end": 35.24, "text": " we probably are immediately asking the question, doesn't simulating every single grain of sand"}, {"start": 35.24, "end": 36.24, "text": " take forever."}, {"start": 36.24, "end": 39.04, "text": " Oh yes, yes it does."}, {"start": 39.04, "end": 44.92, "text": " Indeed the price to be paid for all this amazing detail comes in the form of a large computation"}, {"start": 44.92, "end": 45.92, "text": " time."}, {"start": 45.92, "end": 51.58, "text": " To work around this limitation, continual methods were invented which do the exact opposite"}, {"start": 51.58, "end": 56.92, "text": " by simulating all of these particles as one block where most of the individual particles"}, {"start": 56.92, "end": 60.160000000000004, "text": " within the block behave in a similar manner."}, {"start": 60.160000000000004, "end": 65.12, "text": " This makes the computation times a lot friendlier, however since we are not simulating these"}, {"start": 65.12, "end": 71.36, "text": " grains individually, we lose out on a lot of interesting effects such as clogging, bouncing"}, {"start": 71.36, "end": 72.96000000000001, "text": " and ballistic motions."}, {"start": 72.96000000000001, "end": 79.12, "text": " So, in short, a discrete method gives us a proper simulation but takes forever while"}, {"start": 79.12, "end": 84.4, "text": " the continual methods are approximate in nature but execute quicker."}, {"start": 84.4, "end": 90.12, "text": " And now, from this exposition, the question naturally arises, can we produce a hybrid"}, {"start": 90.12, "end": 95.28, "text": " method that fuses together the advantages of both of these methods."}, {"start": 95.28, "end": 100.24000000000001, "text": " This amazing paper proposes a technique to perform that by subdividing the simulation"}, {"start": 100.24000000000001, "end": 106.48, "text": " domain into an inside regime where the continual methods work well and an outside regime where"}, {"start": 106.48, "end": 111.12, "text": " we need to simulate every grain of sand individually with a discrete method."}, {"start": 111.12, "end": 116.28, "text": " It is not all because the tricky part comes in the form of the reconciliation zone where"}, {"start": 116.28, "end": 121.16000000000001, "text": " a partially discrete and partially continuum simulation has to take place."}, {"start": 121.16000000000001, "end": 125.64, "text": " The way to properly simulate this transition zone between the two regimes takes quite"}, {"start": 125.64, "end": 130.48000000000002, "text": " a bit of research effort to get right and just think about the fact that we have to track"}, {"start": 130.48000000000002, "end": 136.20000000000002, 
"text": " and change these domains over time because, of course, the inside and outside of a block"}, {"start": 136.20000000000002, "end": 139.76, "text": " of particles changes rapidly over time."}, {"start": 139.76, "end": 144.12, "text": " Throughout the video, you will see the continuum zones denoted with red and the discrete zones"}, {"start": 144.12, "end": 147.72, "text": " with blue which are typically on the outside regions."}, {"start": 147.72, "end": 152.76, "text": " The ratio of these zones gives us an idea of how much speed up we could get compared to"}, {"start": 152.76, "end": 154.95999999999998, "text": " a purely discrete simulation."}, {"start": 154.95999999999998, "end": 160.56, "text": " In most cases, it means that 88% fewer discrete particles need to be simulated and this"}, {"start": 160.56, "end": 165.79999999999998, "text": " can lead to a total speed up of 6 to 7 times over that simulation."}, {"start": 165.8, "end": 170.36, "text": " Basically, at least 6 all-nighter simulations running now in one night?"}, {"start": 170.36, "end": 171.56, "text": " I'm in."}, {"start": 171.56, "end": 172.56, "text": " Sign me up."}, {"start": 172.56, "end": 176.64000000000001, "text": " Also make sure to have a look at the paper because the level of execution of this work is"}, {"start": 176.64000000000001, "end": 178.16000000000003, "text": " just something else."}, {"start": 178.16000000000003, "end": 180.20000000000002, "text": " Check it out in the video description."}, {"start": 180.20000000000002, "end": 181.68, "text": " Beautiful work."}, {"start": 181.68, "end": 182.84, "text": " My goodness."}, {"start": 182.84, "end": 212.8, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=dd1kN_myNDs
AI Learns Tracking People In Videos
📝 The paper "Learning Correspondence from the Cycle-Consistency of Time" is available here: https://arxiv.org/abs/1903.07593 ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. There are many AI techniques that are able to look at a still image and identify objects, textures, human poses and object parts in them really well. However, in the age of the internet, we have videos everywhere. So an important question would be how we could do the same for these animations. One of the key ideas in this paper is that the frames of these videos are not completely independent, and they share a lot of information. So, after we make our initial predictions on what is where exactly, these predictions from the previous frame can almost always be reused with a little modification. Not only that, but here you can see with these results that it can also deal with momentary occlusions and is able to track objects that rotate over time. A key part of this method is that, one, it looks back and forth in these videos to update these labels, and two, it learns in a self-supervised manner, which means that all it is given is little more than the raw data, and it was never given a nice dataset with explicit labels of these regions and object parts that it could learn from. You can see in this comparison table that this is not the only method that works for videos; the paper contains ample comparisons against other methods and comes out ahead of all other unsupervised methods, and on this task it can even get quite close to supervised methods. The supervised methods are the ones that have access to these cushy labeled datasets and therefore should come out way ahead, but they don't, which sounds like witchcraft, considering that this technique is learning on its own. However, all this greatness comes with limitations. One of the bigger ones is that even though it does extremely well, it also plateaus, meaning that we don't see a great deal of improvement if we add more training data. Now, whether this is because it is doing nearly as well as is humanly, or computerly, possible, or because a more general problem formulation is still possible, remains a question. I hope we find out soon. Thanks for watching and for your generous support, and I'll see you next time.
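The "looks back and forth in these videos" idea has a compact core: track a patch forward through a clip, then backward to the start, and measure how far it drifted; a good tracker should come back to where it began. The sketch below illustrates that cycle-consistency measurement under clear simplifications: grayscale NumPy frames and a fixed sum-of-squared-differences matcher with hypothetical function names. The paper instead learns the features that the matching is performed with, using exactly this drift as the self-supervised training signal.

```python
import numpy as np

def track_step(src, dst, pos, patch=3, search=3):
    """Move a small patch centred at `pos` from frame `src` to its best match in
    `dst`, using brute-force sum-of-squared-differences over a local window.
    A fixed, hand-crafted matcher -- a stand-in for the learned features."""
    h, w = src.shape
    y = int(np.clip(pos[0], patch, h - patch - 1))
    x = int(np.clip(pos[1], patch, w - patch - 1))
    template = src[y - patch:y + patch + 1, x - patch:x + patch + 1]
    best, best_err = (y, x), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ny, nx = y + dy, x + dx
            if not (patch <= ny < h - patch and patch <= nx < w - patch):
                continue
            cand = dst[ny - patch:ny + patch + 1, nx - patch:nx + patch + 1]
            err = float(np.sum((cand - template) ** 2))
            if err < best_err:
                best, best_err = (ny, nx), err
    return best

def cycle_error(frames, start_pos):
    """Track forward through the clip, then backward to the first frame, and
    report how far the patch drifted from where it started. Driving this drift
    to zero is the self-supervised signal: no labels, just the video itself."""
    pos = tuple(start_pos)
    for a, b in zip(frames[:-1], frames[1:]):        # forward in time
        pos = track_step(a, b, pos)
    rev = frames[::-1]
    for a, b in zip(rev[:-1], rev[1:]):              # and back again
        pos = track_step(a, b, pos)
    return float(np.hypot(pos[0] - start_pos[0], pos[1] - start_pos[1]))

# Example: a bright square drifting to the right over five frames
frames = []
for t in range(5):
    f = np.zeros((24, 24)); f[8:14, 4 + t:10 + t] = 1.0
    frames.append(f)
print("cycle drift in pixels:", cycle_error(frames, (10, 6)))
```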
[{"start": 0.0, "end": 4.2, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifehir."}, {"start": 4.2, "end": 10.120000000000001, "text": " There are many AI techniques that are able to look at a still image and identify objects,"}, {"start": 10.120000000000001, "end": 14.64, "text": " textures, human poses and object parts in them really well."}, {"start": 14.64, "end": 18.52, "text": " However, in the age of the internet, we have videos everywhere."}, {"start": 18.52, "end": 23.12, "text": " So an important question would be how we could do the same for these animations."}, {"start": 23.12, "end": 28.84, "text": " One of the key ideas in this paper is that the frames of these videos are not completely independent"}, {"start": 28.84, "end": 30.84, "text": " and they share a lot of information."}, {"start": 30.84, "end": 35.24, "text": " So, after we make our initial predictions on what is very exactly,"}, {"start": 35.24, "end": 41.2, "text": " these predictions from the previous frame can almost always be reused with a little modification."}, {"start": 41.2, "end": 47.120000000000005, "text": " Not only that, but here you can see with these results that it can also deal with momentary occlusions"}, {"start": 47.120000000000005, "end": 51.0, "text": " and is ready to track objects that rotate over time."}, {"start": 51.0, "end": 57.519999999999996, "text": " A key part of this method is that one, it looks back and forth in these videos to update these labels"}, {"start": 57.52, "end": 63.2, "text": " and second, it learns in a self-supervised manner, which means that all it is given"}, {"start": 63.2, "end": 67.36, "text": " is just a little more than data and was never given a nice data set"}, {"start": 67.36, "end": 71.88, "text": " with explicit labels of these regions and object parts that it could learn from."}, {"start": 71.88, "end": 77.2, "text": " You can see in this comparison table that this is not the only method that works for videos,"}, {"start": 77.2, "end": 81.0, "text": " the paper contains ample comparisons against other methods"}, {"start": 81.0, "end": 85.0, "text": " and comes out ahead of all other unsupervised methods"}, {"start": 85.0, "end": 89.72, "text": " and on this task it can even get quite close to supervised methods."}, {"start": 89.72, "end": 94.72, "text": " The supervised methods are the ones that have access to these cushy label data sets"}, {"start": 94.72, "end": 99.92, "text": " and therefore should come out way ahead, but they don't, which sounds like witchcraft,"}, {"start": 99.92, "end": 103.32, "text": " considering that this technique is learning on its own."}, {"start": 103.32, "end": 107.16, "text": " However, all this greatness comes with limitations."}, {"start": 107.16, "end": 110.92, "text": " One of the bigger ones is that even though it does extremely well,"}, {"start": 110.92, "end": 117.44, "text": " it also plateaus, meaning that we don't see a great deal of improvement if we add more training data."}, {"start": 117.44, "end": 121.84, "text": " Now whether this is because it is doing nearly as well, as it is humanly"}, {"start": 121.84, "end": 128.88, "text": " or computerly possible, or because a more general problem formulation is still possible remains a question."}, {"start": 128.88, "end": 130.72, "text": " I hope we find out soon."}, {"start": 130.72, "end": 141.32, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=mGHKFMXdjKU
DeepMind's AI Learned a Better Understanding of 3D Scenes
Backblaze: https://www.backblaze.com/cloud-backup.html#af9tk4 📝 The paper "MONet: Unsupervised Scene Decomposition and Representation" is available here: https://arxiv.org/abs/1901.11390 ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Background image credit: https://pixabay.com/hu/photos/világ%C3%ADtótorony-magyarország-2542726/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This paper was written by scientists at DeepMind and it is about teaching an AI to look at a 3D scene and decompose it into its individual elements in a meaningful manner. This is typically one of those tasks that is easy to do for humans and is immensely difficult for machines. As this decomposition thing still sounds a little nebulous, let me explain what it means. Here you see an example scene and the segmentation of this scene that the AI came up with, which shows where it thinks the boundaries of the individual objects are. However, we are not stopping there, because it is also able to rip out these objects from the scene one by one. So why is this such a big deal? Well, because of three things. One, it is a generative model, meaning that it is able to reorganize these scenes and create new content that actually makes sense. Two, it can prove that it truly has an understanding of 3D scenes by demonstrating that it can deal with occlusions. For instance, if we ask it to rip out the blue cylinder from this scene, it is able to reconstruct parts of it that weren't even visible in the original scene. Same with the blue sphere here. Amazing, isn't it? And three, this one is a bombshell: it is an unsupervised learning technique. Now, our more seasoned Fellow Scholars fell out of their chairs hearing this, but just in case, this means that the algorithm is able to learn on its own. We still have to feed it a ton of training data, but this training data is not labeled. It just looks at the videos with no additional information, and from watching all this content, it finds out on its own about the concept of these individual objects. The main motivation to create such an algorithm was to have an AI look at some gameplay of the StarCraft II strategy game and be able to recognize all individual units and the background without any additional supervision. I really hope this also means that DeepMind is working on a version of their StarCraft II AI that is able to learn more similarly to how a human does, which is by looking at the pixels of the game. If you look at the details, this will seem almost unfathomably difficult, but it would, of course, make me unreasonably happy. What a time to be alive. If you check out the paper in the video description, you will find how all this is possible through a creative combination of an attention network and a variational autoencoder. This episode has been supported by Backblaze. Backblaze is an unlimited online backup solution for only six dollars a month and I have been using it for years to make sure my personal data, family pictures and the materials required to create this series are safe. You can try it free of charge for 15 days and if you don't like it, you can immediately cancel it without losing anything. Make sure to sign up for Backblaze today through the link in the video description and this way you not only keep your personal data safe, but you also help support this series. Thanks for watching and for your generous support and I'll see you next time.
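As a rough structural illustration of the decomposition idea described above, here is a toy Python sketch of MONet-style sequential masking, where each slot claims part of a remaining "scope" of still-unexplained pixels. The `attention_net` and `component_vae` functions are hypothetical stand-ins for the learned U-Net and variational autoencoder described in the paper.

```python
# Minimal sketch of MONet's sequential masking scheme (structure only).
import numpy as np

def attention_net(image, scope):
    # Hypothetical stand-in for the learned attention U-Net: returns alpha in
    # [0, 1], the fraction of the remaining scope claimed by the current slot.
    return np.full_like(image, 0.5)

def component_vae(image, mask):
    # Hypothetical stand-in for the component VAE: would encode and decode the
    # masked region; here we simply echo the masked image back.
    return image * mask

image = np.random.rand(64, 64)
scope = np.ones_like(image)      # everything is still unexplained at the start
masks, recons = [], []

num_slots = 4
for k in range(num_slots):
    if k < num_slots - 1:
        alpha = attention_net(image, scope)
        mask = scope * alpha     # this slot claims part of the remaining scope
        scope = scope * (1 - alpha)
    else:
        mask = scope             # the last slot takes whatever is left
    masks.append(mask)
    recons.append(component_vae(image, mask))

# The masks sum to one per pixel, so the slots jointly explain the whole image.
print(np.allclose(sum(masks), 1.0))
```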
[{"start": 0.0, "end": 4.36, "text": " Dear Fellow Scholars, this is two-minute pepper sweet caro e johnaifahir."}, {"start": 4.36, "end": 10.44, "text": " This paper was written by scientists at DeepMind and it is about teaching an AI to look at a 3D"}, {"start": 10.44, "end": 15.72, "text": " scene and decompose it into its individual elements in a meaningful manner."}, {"start": 15.72, "end": 20.88, "text": " This is typically one of those tasks that is easy to do for humans and is immensely difficult"}, {"start": 20.88, "end": 22.32, "text": " for machines."}, {"start": 22.32, "end": 27.8, "text": " As this decomposition thing still sounds a little nebulous, let me explain what it means."}, {"start": 27.8, "end": 33.4, "text": " Here you see an example scene and the segmentation of this scene that the AI came up with, which"}, {"start": 33.4, "end": 38.480000000000004, "text": " shows what it thinks where the boundaries of the individual objects are."}, {"start": 38.480000000000004, "end": 43.96, "text": " However, we are not stopping there because it is also able to rip out these objects from"}, {"start": 43.96, "end": 46.32, "text": " the scene one by one."}, {"start": 46.32, "end": 48.8, "text": " So why is this such a big deal?"}, {"start": 48.8, "end": 51.0, "text": " Well, because of three things."}, {"start": 51.0, "end": 57.24, "text": " One, it is a generative model, meaning that it is able to reorganize these scenes and create"}, {"start": 57.24, "end": 60.32, "text": " new content that actually makes sense."}, {"start": 60.32, "end": 66.24000000000001, "text": " Two, it can prove that it truly has an understanding of 3D scenes by demonstrating that it can"}, {"start": 66.24000000000001, "end": 68.16, "text": " deal with occlusions."}, {"start": 68.16, "end": 73.48, "text": " For instance, if we ask it to rip out the blue cylinder from this scene, it is able to"}, {"start": 73.48, "end": 80.2, "text": " reconstruct parts of it that weren't even visible in the original scene."}, {"start": 80.2, "end": 82.64, "text": " Same with the blue sphere here."}, {"start": 82.64, "end": 83.96000000000001, "text": " Amazing, isn't it?"}, {"start": 83.96, "end": 89.72, "text": " And three, this one is a bombshell, it is an unsupervised learning technique."}, {"start": 89.72, "end": 94.67999999999999, "text": " Now, our more seasoned fellow scholars fell out of the chair hearing this, but just in"}, {"start": 94.67999999999999, "end": 99.6, "text": " case, this means that this algorithm is able to learn on its own and we have to feed"}, {"start": 99.6, "end": 104.75999999999999, "text": " it a ton of training data, but this training data is not labeled."}, {"start": 104.75999999999999, "end": 110.28, "text": " It just looks at the videos with no additional information and from watching all this content,"}, {"start": 110.28, "end": 115.04, "text": " it finds out on its own about the concept of these individual objects."}, {"start": 115.04, "end": 120.4, "text": " The main motivation to create such an algorithm was to have an AI look at some gameplay of"}, {"start": 120.4, "end": 126.48, "text": " the StarCraft II strategy game and be able to recognize all individual units and the background"}, {"start": 126.48, "end": 128.52, "text": " without any additional supervision."}, {"start": 128.52, "end": 133.88, "text": " I really hope this also means that DeepMind is working on a version of their StarCraft II"}, {"start": 133.88, "end": 139.68, "text": " AI that is able to learn more similarly to 
how a human does, which is looking at the"}, {"start": 139.68, "end": 141.36, "text": " pixels of the game."}, {"start": 141.36, "end": 147.20000000000002, "text": " If you look at the details, this will seem almost unfathomably difficult, but would, of course,"}, {"start": 147.20000000000002, "end": 149.36, "text": " make me unreasonably happy."}, {"start": 149.36, "end": 151.04000000000002, "text": " What a time to be alive."}, {"start": 151.04000000000002, "end": 155.8, "text": " If you check out the paper in the video description, you will find how all this is possible through"}, {"start": 155.8, "end": 161.20000000000002, "text": " a creative combination of an attention network and a variational autoencoder."}, {"start": 161.20000000000002, "end": 164.08, "text": " This episode has been supported by Backblaze."}, {"start": 164.08, "end": 169.36, "text": " Backblaze is an unlimited online backup solution for only six dollars a month and I have been"}, {"start": 169.36, "end": 175.48000000000002, "text": " using it for years to make sure my personal data, family pictures and the materials required"}, {"start": 175.48000000000002, "end": 177.8, "text": " to create this series are safe."}, {"start": 177.8, "end": 182.84, "text": " You can try it free of charge for 15 days and if you don't like it, you can immediately"}, {"start": 182.84, "end": 185.08, "text": " cancel it without losing anything."}, {"start": 185.08, "end": 189.28000000000003, "text": " Make sure to sign up for Backblaze today through the link in the video description and this"}, {"start": 189.28000000000003, "end": 194.64000000000001, "text": " way you not only keep your personal data safe, but you also help supporting this series."}, {"start": 194.64, "end": 198.55999999999997, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Wxb0jN0X7cs
How To Train Your Virtual Dragon
Patreon: https://www.patreon.com/TwoMinutePapers ₿ Crypto and PayPal links are available below. Thank you very much for your generous support! › PayPal: https://www.paypal.me/TwoMinutePapers › Bitcoin: 1a5ttKiVQiDcr9j8JT2DoHGzLG7XTJccX › Ethereum: 0xbBD767C0e14be1886c6610bf3F592A91D866d380 › LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg 📝 The paper "Aerobatics Control of Flying Creatures via Self-Regulated Learning" is available here: http://mrl.snu.ac.kr/research/ProjectAerobatics/Aerobatics.htm 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Scientists at Seoul National University in South Korea wrote a great paper on teaching an imaginary dragon all kinds of really cool aerobatic maneuvers like sharp turning, rapid winding, rolling, soaring, and diving. This is all done by a reinforcement learning variant where the problem formulation is that the AI has to continuously choose a character's actions to maximize a reward. Here, this reward function is related to a trajectory which we can draw in advance. These are the lines that the dragon seems to follow quite well. However, what you see here is the finished product. Curious to see how the dragon falters as it learns to maneuver properly? Well, we are in luck. Buckle up. You see the ideal trajectory here in black, and initially the dragon was too clumsy to navigate in a way that even resembles this path. Then, later, it learned to start the first turn properly, but as you see here, it was unable to avoid the obstacle and likely needs to fly to the emergency room. But it would probably miss that building too, of course. After more learning, it was able to finish the first loop but was still too inaccurate to perform the second. And finally, at last, it became adept at performing this difficult maneuver. A plus. One of the main difficulties of this problem is the fact that the dragon is always in motion and has a lot of momentum. Anything we do always has an effect later, and we not only have to find one good action but whole sequences of actions that will lead us to victory. This is quite difficult. So how do we do that? To accomplish this, this work not only uses a reinforcement learning variant, but also adds something called self-regulated learning to it, where we don't present the AI with a fixed curriculum, but put the learner in charge of its own learning. This also means that it is able to take a big, complex goal and subdivide it into new, smaller goals. In this case, the big goal is following the trajectory with some additional constraints, which, by itself, turned out to be too difficult to learn with these traditional techniques. Instead, the agent realizes that if it tracks its own progress on a set of separate but smaller sub-goals, such as tracking its own orientation, positions, and rotations against the desired target states separately, it can finally learn to perform these amazing stunts. That sounds great, but how is this done exactly? This is done through a series of three steps, where step one is generation, in which the learner creates a few alternative solutions for itself and proceeds to the second step, evaluation, where it has to judge these individual alternatives and find the best ones. And third, learning, which means looking back and recording whether these judgments indeed put the learner in a better position. By iterating these three steps, this virtual dragon learned to fly properly. Isn't this amazing? I mentioned earlier that this kind of problem formulation is intractable without self-regulated learning, and you can see here how a previous work fares on following these trajectories. There is indeed a world of difference between the two. So there you go: in case you enter a virtual world where you need to train your own dragon, you'll know what to do. But just in case, also read the paper in the video description.
If you enjoyed this episode and you wish to watch our other videos in early access or get your name immortalized in the video description, please consider supporting us on Patreon through patreon.com slash two minute papers. The link is available in the video description, and this way we can make better videos for you. We also support cryptocurrencies; the addresses are also available in the video description. Thanks for watching and for your generous support and I'll see you next time.
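For intuition about the generation, evaluation and learning steps described in this transcript, here is a deliberately tiny and generic Python sketch of such a self-improvement loop. The toy policy, scoring function and update rule are assumptions made purely for illustration; they are not the authors' physics-based controller.

```python
# Generic generate / evaluate / learn loop (illustrative only).
import random

def generate_candidates(policy, num=4, horizon=10):
    # Step 1 (generation): the learner proposes a few alternative action sequences.
    return [[policy() for _ in range(horizon)] for _ in range(num)]

def evaluate(candidate, target):
    # Step 2 (evaluation): judge each alternative, e.g. by how closely it
    # tracks a target trajectory (higher is better).
    return -sum(abs(a - t) for a, t in zip(candidate, target))

def learn(param, best_candidate, lr=0.1):
    # Step 3 (learning): record which judged-best choice worked and nudge the
    # policy toward it (here: a crude move toward its mean action).
    mean_action = sum(best_candidate) / len(best_candidate)
    return param + lr * (mean_action - param)

param = 0.0
target = [0.5] * 10  # a toy "trajectory" to follow

for step in range(50):
    policy = lambda: param + random.gauss(0.0, 0.2)
    candidates = generate_candidates(policy)
    best = max(candidates, key=lambda c: evaluate(c, target))
    param = learn(param, best)

print(f"learned parameter: {param:.2f} (target 0.50)")
```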
[{"start": 0.0, "end": 5.04, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Jolyna Ifehir."}, {"start": 5.04, "end": 11.120000000000001, "text": " Scientists at the Seoul National University in South Korea wrote a great paper on teaching"}, {"start": 11.120000000000001, "end": 17.8, "text": " an imaginary dragon all kinds of really cool aerobatic maneuvers like sharp turning,"}, {"start": 17.8, "end": 21.84, "text": " rapid winding, rolling, soaring, and diving."}, {"start": 21.84, "end": 26.68, "text": " This is all done by a reinforcement learning variant where the problem formulation is that"}, {"start": 26.68, "end": 32.36, "text": " the AI has to continuously choose a character's actions to maximize a reward."}, {"start": 32.36, "end": 38.12, "text": " Here, this reward function is related to a trajectory which we can draw in advance."}, {"start": 38.12, "end": 41.64, "text": " These are the lines that the dragon seems to follow quite well."}, {"start": 41.64, "end": 45.28, "text": " However, what you see here is the finished product."}, {"start": 45.28, "end": 49.64, "text": " Curious to see how the dragon falters as it learns to maneuver properly?"}, {"start": 49.64, "end": 51.44, "text": " Well, we are in luck."}, {"start": 51.44, "end": 52.44, "text": " Buckle up."}, {"start": 52.44, "end": 57.96, "text": " You see the ideal trajectory here with black, and initially the dragon was too clumsy to"}, {"start": 57.96, "end": 61.72, "text": " navigate in a way that even resembles this path."}, {"start": 61.72, "end": 67.67999999999999, "text": " Then, later, it learned to start the first turn properly, but as you see here, it was"}, {"start": 67.67999999999999, "end": 72.92, "text": " unable to avoid the obstacle and likely needs to fly to the emergency room."}, {"start": 72.92, "end": 76.12, "text": " But it would probably miss that building too, of course."}, {"start": 76.12, "end": 80.92, "text": " After more learning, it was able to finish the first loop but was still too inaccurate"}, {"start": 80.92, "end": 84.36, "text": " to perform the second."}, {"start": 84.36, "end": 90.76, "text": " And finally, at last, it became adept at performing this difficult maneuver."}, {"start": 90.76, "end": 94.6, "text": " A plus."}, {"start": 94.6, "end": 99.6, "text": " One of the main difficulties of this problem is the fact that the dragon is always in motion"}, {"start": 99.6, "end": 101.44, "text": " and has a lot of momentum."}, {"start": 101.44, "end": 107.0, "text": " And anything we do always has an effect later and we not only have to find one good action"}, {"start": 107.0, "end": 111.0, "text": " but whole sequences of actions that will lead us to victory."}, {"start": 111.0, "end": 112.56, "text": " This is quite difficult."}, {"start": 112.56, "end": 114.52, "text": " So how do we do that?"}, {"start": 114.52, "end": 119.52, "text": " To accomplish this, this work not only uses a reinforcement learning variant, but also"}, {"start": 119.52, "end": 125.12, "text": " adds something called self-regulated learning to it where we don't present the AI with a fixed"}, {"start": 125.12, "end": 129.68, "text": " curriculum, but we put the learner in charge of its own learning."}, {"start": 129.68, "end": 136.24, "text": " This also means that it is able to take a big, complex goal and subdivide it into new,"}, {"start": 136.24, "end": 137.60000000000002, "text": " smaller goals."}, {"start": 137.60000000000002, "end": 142.68, "text": " In this case, the big goal is 
following the trajectory with some more additional constraints"}, {"start": 142.68, "end": 148.36, "text": " which, by itself, turned out to be too difficult to learn with these traditional techniques."}, {"start": 148.36, "end": 154.52, "text": " Instead, the agent realizes that if it tracks its own progress on a set of separate but"}, {"start": 154.52, "end": 160.32000000000002, "text": " smaller sub-goals, such as tracking its own orientation, positions, and rotations"}, {"start": 160.32, "end": 166.44, "text": " so guess the desired target states separately, it can finally learn to perform these amazing"}, {"start": 166.44, "end": 167.6, "text": " stunts."}, {"start": 167.6, "end": 170.92, "text": " That sounds great, but how is this done exactly?"}, {"start": 170.92, "end": 176.28, "text": " This is done through a series of three steps where step one is generation, where the learner"}, {"start": 176.28, "end": 182.6, "text": " creates a few alternative solutions for itself and proceeds to the second step, evaluation"}, {"start": 182.6, "end": 187.64, "text": " where it has to judge these individual alternatives and find the best ones."}, {"start": 187.64, "end": 193.39999999999998, "text": " And third, learning, which means looking back and recording whether these judgments, indeed,"}, {"start": 193.39999999999998, "end": 195.76, "text": " put the learner in a better position."}, {"start": 195.76, "end": 200.76, "text": " By iterating these three steps, this virtual dragon, learn to fly properly."}, {"start": 200.76, "end": 202.67999999999998, "text": " Isn't this amazing?"}, {"start": 202.67999999999998, "end": 207.67999999999998, "text": " I mentioned earlier that this kind of problem formulation is intractable without self-regulated"}, {"start": 207.67999999999998, "end": 213.0, "text": " learning and you can see here how a previous work fares on following these trajectories."}, {"start": 213.0, "end": 216.88, "text": " There is indeed a world of a difference between the two."}, {"start": 216.88, "end": 222.51999999999998, "text": " So there you go, in case you enter a virtual world where you need to train your own dragon,"}, {"start": 222.51999999999998, "end": 223.84, "text": " you'll know what to do."}, {"start": 223.84, "end": 227.4, "text": " But just in case, also read the paper in the video description."}, {"start": 227.4, "end": 232.32, "text": " If you enjoyed this episode and you wish to watch our other videos in early access or"}, {"start": 232.32, "end": 237.68, "text": " get your name immortalized in the video description, please consider supporting us on Patreon"}, {"start": 237.68, "end": 241.28, "text": " through patreon.com slash two minute papers."}, {"start": 241.28, "end": 246.32, "text": " The link is available in the video description and this way we can make better videos for you."}, {"start": 246.32, "end": 251.4, "text": " We also support crypto currencies, the addresses are also available in the video description."}, {"start": 251.4, "end": 279.4, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=XSWqLb0VyzM
Exploring And Attacking Neural Networks With Activation Atlases
📝 The paper "Exploring Neural Networks with Activation Atlases" is available here: https://distill.pub/2019/activation-atlas/ ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. When it comes to image classification tasks, in which the input is a photograph and the output is a decision as to what is depicted in this photo, neural network-based learning solutions became more accurate than any other computer program we humans could possibly write by hand. Because of that, the question naturally arises: what do these neural networks really do inside to make this happen? This article explores new ways to visualize the inner workings of these networks, and since it was published in the Distill journal, you can expect beautiful and interactive visualizations that you can also play with if you have a look in the video description. It is so good, I really hope that more modern journals like this appear in the near future. But back to our topic. Wait a second, we already had several videos on neural network visualization before, so what is new here? Well, let's see. First, we have looked at visualizations for individual neurons. This can be done by starting from a noisy image and adding slight modifications to it in a way that makes a chosen neuron extremely excited. This results in these beautiful colored patterns. I absolutely love, love, love these patterns; however, this misses all the potential interactions between the neurons, of which there are quite many. With this, we have arrived at pairwise neuron activations, which sheds more light on how these neurons work together. Another one of those beautiful patterns. This is, of course, somewhat more informative. Intuitively, if visualizing individual neurons was equivalent to looking at a single little line, the pairwise interactions would be observing 2D slices of a space. However, we are still not seeing too much of this space of activations, and the even bigger issue is that this space is not our ordinary 3D space, but a high-dimensional one. Visualizing spatial activations gives us more information about these interactions between not two, but more neurons, which brings us closer to a full-blown visualization. However, this new activation atlas technique is able to provide us with even more extra knowledge. How? Well, you see here with the dots that it provides us a denser sampling of the most likely activations, and this leads to a more complete, bigger-picture view of the inner workings of the neural network. This is what it looks like if we run it on one image. It also provides us with way more extra value, because so far we have only seen how the neural network reacts to one image, but this method can be extended to see its reaction to not one, but one million images. You can see an example of that here. What's more, it can also unveil weaknesses in the neural network. Have a look at this amazing example where the visualization uncovers that we can make this neural network misclassify a gray whale as a great white shark, and all we need to do is just brazenly put a baseball in this image. It is not a beautiful montage, is it? Well, that's not a drawback, that's exactly the point. No finesse is required, and the network is still fooled by this poorly edited adversarial image. We can also trace paths in this atlas, which reveal how the neural network decides whether one or multiple people are in an image, or how to tell a watery type of terrain from a rocky cliff.
Again, we have only scratched the surface here, and you can play with these visualizations yourself, so make sure to have a closer look at the paper through the link in the video description. You won't regret it. Let me know in the comments section how it went. Thanks for watching, and for your generous support, and I'll see you next time.
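To sketch what "a denser sampling of the most likely activations" might look like in code, here is a rough, assumption-laden Python outline of the atlas recipe: collect activation vectors for many images, project them to 2D, and average the vectors that land in each grid cell, where each average would then be rendered with feature visualization. The random activations and the use of t-SNE are stand-ins, not the authors' exact pipeline.

```python
# Rough outline of building an activation atlas (toy data, not the paper's code).
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
activations = rng.normal(size=(300, 64))     # one vector per image patch (toy)

# 1) Embed the high-dimensional activations into 2D.
coords = TSNE(n_components=2, random_state=0).fit_transform(activations)

# 2) Overlay a grid and average the activations that fall into each cell;
#    each averaged vector would then be rendered with feature visualization.
grid = 10
cell_x = np.digitize(coords[:, 0], np.linspace(coords[:, 0].min(), coords[:, 0].max(), grid))
cell_y = np.digitize(coords[:, 1], np.linspace(coords[:, 1].min(), coords[:, 1].max(), grid))

atlas_cells = {}
for cx, cy in zip(cell_x, cell_y):
    members = (cell_x == cx) & (cell_y == cy)
    atlas_cells[(cx, cy)] = activations[members].mean(axis=0)

print(f"{len(atlas_cells)} atlas cells to visualize")
```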
[{"start": 0.0, "end": 5.68, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Jolene Fahir, when it comes to image"}, {"start": 5.68, "end": 11.72, "text": " classification tasks in which the input is a photograph and the output is a decision"}, {"start": 11.72, "end": 17.68, "text": " as to what is depicted in this photo, neural network-based learning solutions became more accurate"}, {"start": 17.68, "end": 23.16, "text": " than any other computer program we humans could possibly write by hand."}, {"start": 23.16, "end": 26.400000000000002, "text": " Because of that, the question naturally arises."}, {"start": 26.4, "end": 30.599999999999998, "text": " What do these neural networks really do inside to make this happen?"}, {"start": 30.599999999999998, "end": 36.2, "text": " This article explores new ways to visualize the inner workings of these networks and since"}, {"start": 36.2, "end": 42.04, "text": " it was published in the distale journal, you can expect beautiful and interactive visualizations"}, {"start": 42.04, "end": 46.0, "text": " that you can also play with if you have a look in the video description."}, {"start": 46.0, "end": 52.16, "text": " It is so good, I really hope that more modern journals like this appear in the near future."}, {"start": 52.16, "end": 58.559999999999995, "text": " But back to our topic, wait a second, we already had several videos on neural network visualization"}, {"start": 58.559999999999995, "end": 61.519999999999996, "text": " before, so what is new here?"}, {"start": 61.519999999999996, "end": 63.31999999999999, "text": " Well, let's see."}, {"start": 63.31999999999999, "end": 67.8, "text": " First, we have looked at visualizations for individual neurons."}, {"start": 67.8, "end": 73.6, "text": " This can be done by starting from a noisy image and add slight modifications to it in a way"}, {"start": 73.6, "end": 77.4, "text": " that makes a chosen neuron extremely excited."}, {"start": 77.4, "end": 80.4, "text": " This results in these beautiful colored patterns."}, {"start": 80.4, "end": 87.16000000000001, "text": " I absolutely love, love, love these patterns, however, dismisses all the potential interactions"}, {"start": 87.16000000000001, "end": 91.08000000000001, "text": " between the neurons of which there are quite many."}, {"start": 91.08000000000001, "end": 96.64000000000001, "text": " With this, we have arrived to pairwise neuron activations which adds more light on how"}, {"start": 96.64000000000001, "end": 99.24000000000001, "text": " these neurons work together."}, {"start": 99.24000000000001, "end": 101.68, "text": " Another one of those beautiful patterns."}, {"start": 101.68, "end": 104.88000000000001, "text": " This is, of course, somewhat more informative."}, {"start": 104.88000000000001, "end": 110.36000000000001, "text": " Intuitively, if visualizing individual neurons was equivalent to looking at a sadly"}, {"start": 110.36, "end": 116.08, "text": " little line, the pairwise interactions would be observing two dislices in a space."}, {"start": 116.08, "end": 120.92, "text": " However, we are still not seeing too much from this space of activations, and the even"}, {"start": 120.92, "end": 128.16, "text": " bigger issue is that this space is not our ordinary 3D space, but a high dimensional one."}, {"start": 128.16, "end": 132.64, "text": " Visualizing spatial activations gives us more information about these interactions between"}, {"start": 132.64, "end": 138.52, "text": " not two, but more neurons, 
which brings us closer to a full-blown visualization."}, {"start": 138.52, "end": 145.96, "text": " However, this new activation Atlas technique is able to provide us with even more extra knowledge."}, {"start": 145.96, "end": 146.96, "text": " How?"}, {"start": 146.96, "end": 152.20000000000002, "text": " Well, you see here with the dots that it provides us a denser sampling of the most likely"}, {"start": 152.20000000000002, "end": 158.32000000000002, "text": " activations, and this leads to a more complete, bigger picture view of the inner workings"}, {"start": 158.32000000000002, "end": 160.12, "text": " of the neural network."}, {"start": 160.12, "end": 162.96, "text": " This is what it looks like if we run it on one image."}, {"start": 162.96, "end": 168.76000000000002, "text": " It also provides us with way more extra value, because so far we have only seen how the"}, {"start": 168.76000000000002, "end": 174.12, "text": " neural network reacts to one image, but this method can be extended to see its reaction"}, {"start": 174.12, "end": 177.52, "text": " to not one, but one million images."}, {"start": 177.52, "end": 180.32, "text": " You can see an example of that here."}, {"start": 180.32, "end": 185.24, "text": " What's more, it can also unveil weaknesses in the neural network."}, {"start": 185.24, "end": 190.08, "text": " Have a look at this amazing example where the visualization uncovers that we can make"}, {"start": 190.08, "end": 196.24, "text": " this neural network misclassify a gray whale for a gray-twight shark, and all we need"}, {"start": 196.24, "end": 200.52, "text": " to do is just brazenly put a baseball in this image."}, {"start": 200.52, "end": 202.88000000000002, "text": " It is not a beautiful montage, is it?"}, {"start": 202.88000000000002, "end": 206.88000000000002, "text": " Well, that's not a drawback, that's exactly the point."}, {"start": 206.88000000000002, "end": 212.4, "text": " No finesse is required, and the network is still fooled by this poorly edited adversarial"}, {"start": 212.4, "end": 214.24, "text": " image."}, {"start": 214.24, "end": 219.48000000000002, "text": " We can also trace paths in this Atlas, which reveal how the neural network decides whether"}, {"start": 219.48, "end": 225.76, "text": " one or multiple people are in an image, or how to tell a watery type terrain from a rocky"}, {"start": 225.76, "end": 226.76, "text": " cliff."}, {"start": 226.76, "end": 231.28, "text": " Again, we have only scratched the surface here, and you can play with these visualizations"}, {"start": 231.28, "end": 235.83999999999997, "text": " yourself, so make sure to have a closer look at the paper through the link in the video"}, {"start": 235.83999999999997, "end": 236.83999999999997, "text": " description."}, {"start": 236.83999999999997, "end": 238.23999999999998, "text": " You won't regret it."}, {"start": 238.23999999999998, "end": 240.39999999999998, "text": " Let me know in the comments section how it went."}, {"start": 240.4, "end": 249.6, "text": " Thanks for watching, and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=hW1_Sidq3m8
NVIDIA's AI Creates Beautiful Images From Your Sketches! ✏️
If you feel like it, buy anything through this Amazon link - you don't lose anything and we get a small kickback. US: https://amzn.to/2FQHPcs EU: https://amzn.to/2UnB2yF 📝 The paper "Semantic Image Synthesis with Spatially-Adaptive Normalization" and its source code is available here: https://nvlabs.github.io/SPADE/ https://github.com/NVlabs/SPADE ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. I know for a fact that some of you remember our first video on image translation, which was approximately three years and 250 episodes ago. This was a technique where we took an input painting and the labeling of this image that shows what kind of objects are depicted, and then we could start editing this labeling, and out came a pretty neat image that satisfies these labels. Then came pix2pix, another image translation technique, which in some cases only required a labeling; the source photo was not required because these features were learned from a large amount of training samples. And it could perform really cool things like translating a landscape into a map, or sketches to photos, and more. Both of these works were absolutely amazing, and I always say two more papers down the line and we are going to have much higher resolution images. So this time, here is the paper that is in fact two more papers down the line. So let's see what it can do. I advise you to hold on to your papers for this one. The input is again a labeling which we can draw ourselves, and the output is a hopefully photorealistic image that adheres to these labels. I like how first only the silhouette of the rock is drawn, so we have this hollow thing on the right that is not very realistic, and then it is filled in with the bucket tool, and there you go. It looks amazing. It synthesizes a relatively high resolution image and we finally have some detail in there too. But of course there are many possible images that correspond to this input labeling. How do we control the algorithm to follow our artistic goals? Well, you may remember from the first work I've shown you that we could do that by adding an additional image as an input style. Well, look at that. We don't even need to engage in that, because here we can choose from a set of input styles that are built into the algorithm, and we can switch between them almost immediately. I think the results speak for themselves, but note that not only the visual fidelity, but the alignment with the input labels is also superior to previous approaches. Of course, to perform this we need a large amount of training data where the inputs are labels and the outputs are the photorealistic images. So how do we generate such a dataset? Drawing a bunch of labels and asking artists to fill them in sounds like a crude and expensive idea. Well, of course, we can do it for free by thinking the other way around. Let's take a set of photorealistic images and use already existing algorithms to create a labeling for them. If we can do that, we'll have as many training samples as images we have, in other words, more than enough to train an amazing neural network. Also, the main part of the magic in this new work is using a new kind of layer for normalizing information within this neural network that adapts better to our input data than the previously used batch normalization layers. This is what makes the outputs more crisp and does not let semantic information be washed away in these images. If you have a closer look at the paper in the video description, you will also find a nice evaluation section with plenty of comparisons to previous algorithms, and according to the authors, the source code will be released soon as well. As soon as it comes out, everyone will be able to dream up beautiful photorealistic images and get them out almost instantly. What a time to be alive.
If you have enjoyed this episode and would like to support us, please click one of the Amazon affiliate links in the video description and buy something that you are looking to buy on Amazon anyway. You don't lose anything, and this way we get a small kickback which is a great way to support the series so we can make better videos for you. Thanks for watching and for your generous support and I'll see you next time.
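Since the "new kind of layer for normalizing information" is the heart of this paper, here is a condensed PyTorch sketch of a SPADE-style spatially-adaptive normalization layer, written from the paper's description. The layer sizes, names and hidden width are illustrative assumptions rather than the released NVIDIA code.

```python
# Sketch of a SPADE-style normalization layer (illustrative, not the official code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    def __init__(self, num_features, label_channels, hidden=128):
        super().__init__()
        # Parameter-free normalization; the usual learned scale and shift are
        # replaced by values predicted from the semantic label map below.
        self.norm = nn.BatchNorm2d(num_features, affine=False)
        self.shared = nn.Sequential(nn.Conv2d(label_channels, hidden, 3, padding=1), nn.ReLU())
        self.gamma = nn.Conv2d(hidden, num_features, 3, padding=1)
        self.beta = nn.Conv2d(hidden, num_features, 3, padding=1)

    def forward(self, x, segmap):
        # Resize the label map to the feature resolution and predict per-pixel
        # scale (gamma) and shift (beta), so the semantic layout is not washed
        # away by normalization.
        segmap = F.interpolate(segmap, size=x.shape[2:], mode="nearest")
        actv = self.shared(segmap)
        return self.norm(x) * (1 + self.gamma(actv)) + self.beta(actv)

# Toy usage with assumed tensor sizes.
x = torch.randn(2, 64, 32, 32)          # features inside the generator
segmap = torch.randn(2, 10, 256, 256)   # semantic label map with 10 classes
print(SPADE(64, 10)(x, segmap).shape)   # torch.Size([2, 64, 32, 32])
```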
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Ejona Ifehir."}, {"start": 4.32, "end": 10.0, "text": " I know for a fact that some of you remember our first video on image translation, which"}, {"start": 10.0, "end": 14.32, "text": " was approximately three years and 250 episodes ago."}, {"start": 14.32, "end": 18.8, "text": " This was a technique where we took an input painting and the labeling of this image that"}, {"start": 18.8, "end": 24.82, "text": " shows what kind of objects are depicted and then we could start editing this labeling"}, {"start": 24.82, "end": 29.72, "text": " and out came a pretty neat image that satisfies these labels."}, {"start": 29.72, "end": 35.519999999999996, "text": " One came, picks to picks, another image translation technique which in some cases only required"}, {"start": 35.519999999999996, "end": 40.72, "text": " a labeling, the source photo was not required because these features were learned from"}, {"start": 40.72, "end": 42.96, "text": " a large amount of training samples."}, {"start": 42.96, "end": 48.7, "text": " And it could perform really cool things like translating a landscape into a map or sketches"}, {"start": 48.7, "end": 51.2, "text": " to photos and more."}, {"start": 51.2, "end": 56.8, "text": " Both of these works were absolutely amazing and I always say two more papers down the line"}, {"start": 56.8, "end": 60.239999999999995, "text": " and we are going to have much higher resolution images."}, {"start": 60.239999999999995, "end": 65.96, "text": " So this time here is the paper that is in fact two more papers down the line."}, {"start": 65.96, "end": 67.88, "text": " So let's see what it can do."}, {"start": 67.88, "end": 71.16, "text": " I advise you that you hold on to your papers for this one."}, {"start": 71.16, "end": 76.67999999999999, "text": " The input is again a labeling which we can draw ourselves and the output is a hopefully"}, {"start": 76.67999999999999, "end": 80.47999999999999, "text": " photorealistic image that adheres to these labels."}, {"start": 80.47999999999999, "end": 85.47999999999999, "text": " I like how first only the silhouette of the rock is drawn so we have this hollow thing"}, {"start": 85.48, "end": 90.76, "text": " on the right that is not very realistic and then it is now filled in with the bucket"}, {"start": 90.76, "end": 92.92, "text": " tool and there you go."}, {"start": 92.92, "end": 94.96000000000001, "text": " It looks amazing."}, {"start": 94.96000000000001, "end": 99.72, "text": " It synthesizes a relatively high resolution image and we finally have some detail in"}, {"start": 99.72, "end": 100.88000000000001, "text": " there too."}, {"start": 100.88000000000001, "end": 106.2, "text": " But of course there are many possible images that correspond to this input labeling."}, {"start": 106.2, "end": 110.16, "text": " How do we control the algorithm to follow our artistic goals?"}, {"start": 110.16, "end": 115.32000000000001, "text": " Well, you remember from the first work I've shown you where we could do that by adding"}, {"start": 115.32, "end": 118.16, "text": " an additional image as an input style."}, {"start": 118.16, "end": 120.39999999999999, "text": " Well, look at that."}, {"start": 120.39999999999999, "end": 125.63999999999999, "text": " We don't even need to engage in that because here we can choose from a set of input styles"}, {"start": 125.63999999999999, "end": 131.44, "text": " that are built into the algorithm and we can 
switch between them almost immediately."}, {"start": 131.44, "end": 137.12, "text": " I think the results speak for themselves but note that not only the visual fidelity but"}, {"start": 137.12, "end": 142.24, "text": " the alignment with the input labels is also superior to previous approaches."}, {"start": 142.24, "end": 148.04000000000002, "text": " Of course to perform this we need a large amount of training data where the inputs are labels"}, {"start": 148.04000000000002, "end": 151.16, "text": " and the outputs are the photorealistic images."}, {"start": 151.16, "end": 153.88, "text": " So how do we generate such a dataset?"}, {"start": 153.88, "end": 159.64000000000001, "text": " Drawing a bunch of labels and asking artists to fill them in sounds like a crude and expensive"}, {"start": 159.64000000000001, "end": 160.64000000000001, "text": " idea."}, {"start": 160.64000000000001, "end": 165.96, "text": " Well, of course we can do it for free by thinking the other way around."}, {"start": 165.96, "end": 171.24, "text": " Let's take a set of photorealistic images and use already existing algorithms to create"}, {"start": 171.24, "end": 172.96, "text": " a labeling for them."}, {"start": 172.96, "end": 178.44, "text": " If we can do that, we'll have as many training samples, as many images we have, in other"}, {"start": 178.44, "end": 182.36, "text": " words more than enough to train an amazing neural network."}, {"start": 182.36, "end": 188.64000000000001, "text": " Also, the main part of the magic in this new work is using a new kind of layer for normalizing"}, {"start": 188.64000000000001, "end": 193.88, "text": " information within this neural network that adapts better to our input data than the"}, {"start": 193.88, "end": 196.88, "text": " previously used batch normalization layers."}, {"start": 196.88, "end": 201.51999999999998, "text": " This is what makes the outputs more crisp and does not let semantic information be washed"}, {"start": 201.51999999999998, "end": 203.24, "text": " away in these images."}, {"start": 203.24, "end": 207.24, "text": " If you have a closer look at the paper in the video description, you will also find a"}, {"start": 207.24, "end": 213.2, "text": " nice evaluation section with plenty of comparisons to previous algorithms and according to the"}, {"start": 213.2, "end": 216.88, "text": " authors, the source code will be released soon as well."}, {"start": 216.88, "end": 222.56, "text": " As soon as it comes out, everyone will be able to dream up beautiful photorealistic images"}, {"start": 222.56, "end": 225.32, "text": " and get them out almost instantly."}, {"start": 225.32, "end": 226.95999999999998, "text": " Another time to be alive."}, {"start": 226.95999999999998, "end": 231.4, "text": " If you have enjoyed this episode and would like to support us, please click one of the Amazon"}, {"start": 231.4, "end": 235.72, "text": " affiliate links in the video description and buy something that you are looking to buy"}, {"start": 235.72, "end": 237.4, "text": " on Amazon anyway."}, {"start": 237.4, "end": 241.88, "text": " You don't lose anything, and this way we get a small kickback which is a great way to"}, {"start": 241.88, "end": 245.2, "text": " support the series so we can make better videos for you."}, {"start": 245.2, "end": 255.2, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=iKrrKyeSRew
How Do Neural Networks Memorize Text?
📝 The paper "Visualizing memorization in RNNs" is available here: https://distill.pub/2019/memorization-in-rnns/ ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is an article from the Distill journal, so expect a lot of intuitive and beautiful visualizations. And it is about recurrent neural networks. These are neural network variants that are specialized to be able to deal with sequences of data. For instance, processing and completing text is a great example usage of these recurrent networks. So, why is that? Well, if we wish to finish a sentence, we are not only interested in the latest letter in this sentence, but several letters before that, and of course, the order of these letters is also of utmost importance. Here you can see, with the green rectangles, which previous letters these recurrent neural networks memorize when reading and completing our sentences. LSTM stands for Long Short-Term Memory, and GRU means Gated Recurrent Unit; both are recurrent neural networks. And you see here that the nested LSTM doesn't really look back further than the current word we are processing, while the classic LSTM almost always memorizes a lengthy history of previous words. And now, look, interestingly, with the GRU, when looking at the start of the word grammar here, we barely know anything about this new word, so it memorizes the entire previous word, as it may be the most useful information we have at the time. And now, as we proceed a few more letters into this word, it mostly shifts its attention to a shorter segment, that is, the letters of this new word we are currently writing. Luckily, the paper is even more interactive, meaning that you can also add a piece of text here and see how the GRU network processes it. One of the main arguments of this paper is that when comparing these networks against each other in terms of quality, we shouldn't only look at the output text they generate. For instance, it is possible for two models that work quite differently to have a very similar accuracy and score on these tests. The author argues that we should look beyond these metrics and look at this kind of connectivity information as well. This way, we may find useful pieces of knowledge, like the fact that the GRU is better at utilizing longer-term contextual understanding. A really cool finding indeed, and I am sure this will also be a useful visualization tool when developing new algorithms and finding faults in previous ones. Love it! Thanks for watching and for your generous support, and I'll see you next time!
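One way to read the "connectivity" visualizations mentioned above is as gradient magnitudes: how strongly the prediction at the current step depends on each earlier input character. Here is a small PyTorch sketch of that measurement on a randomly initialized toy GRU; the sizes and random tokens are assumptions, and the article's own setup differs in detail.

```python
# Toy measurement of input-to-output "connectivity" through a GRU.
import torch
import torch.nn as nn

vocab, emb_dim, hidden = 30, 16, 32
embed = nn.Embedding(vocab, emb_dim)
gru = nn.GRU(emb_dim, hidden, batch_first=True)
head = nn.Linear(hidden, vocab)

tokens = torch.randint(0, vocab, (1, 12))   # a toy character sequence
x = embed(tokens)
x.retain_grad()                             # keep gradients w.r.t. the inputs
out, _ = gru(x)
logits = head(out[:, -1])                   # prediction for the next character

# Backprop the strongest logit and read off, per time step, how much each
# earlier character influenced it - the green squares in the visualization.
logits.max().backward()
connectivity = x.grad.norm(dim=-1).squeeze(0)
print(connectivity)
```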
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejolna Ifehir."}, {"start": 4.32, "end": 10.96, "text": " This is an article from the distal journal, so expect a lot of intuitive and beautiful visualizations."}, {"start": 10.96, "end": 13.48, "text": " And it is about recurrent neural networks."}, {"start": 13.48, "end": 18.2, "text": " These are neural network variants that are specialized to be able to deal with sequences"}, {"start": 18.2, "end": 19.28, "text": " of data."}, {"start": 19.28, "end": 24.240000000000002, "text": " For instance, processing and completing text is a great example usage of these recurrent"}, {"start": 24.240000000000002, "end": 25.240000000000002, "text": " networks."}, {"start": 25.240000000000002, "end": 27.28, "text": " So, why is that?"}, {"start": 27.28, "end": 32.36, "text": " Well, if we wish to finish a sentence, we are not only interested in the latest letter"}, {"start": 32.36, "end": 38.64, "text": " in this sentence, but several letters before that, and of course, the order of these letters"}, {"start": 38.64, "end": 41.04, "text": " is also of utmost importance."}, {"start": 41.04, "end": 45.480000000000004, "text": " Here you can see with the green rectangles, which previous letters these recurrent neural"}, {"start": 45.480000000000004, "end": 49.36, "text": " networks memorize when reading and completing our sentences."}, {"start": 49.36, "end": 56.040000000000006, "text": " LSTM stands for Long Short-Term Memory, and GRU means Gated Recurrent Unit, both are recurrent"}, {"start": 56.04, "end": 57.04, "text": " neural networks."}, {"start": 57.04, "end": 62.2, "text": " And you see here that the nested LSTM doesn't really look back further than the current"}, {"start": 62.2, "end": 69.24, "text": " word we are processing, while the classic LSTM almost always memorizes a lengthy history"}, {"start": 69.24, "end": 70.92, "text": " of previous words."}, {"start": 70.92, "end": 76.24, "text": " And now, look, interestingly, with GRU, when looking at the start of the word grammar"}, {"start": 76.24, "end": 82.52, "text": " here, we barely know anything about this new word, so it memorizes the entire previous"}, {"start": 82.52, "end": 87.11999999999999, "text": " word as it may be the most useful information we have at the time."}, {"start": 87.11999999999999, "end": 92.52, "text": " And now, as we proceed a few more letters in this word, it mostly shifts its attention"}, {"start": 92.52, "end": 98.11999999999999, "text": " to a shorter segment, that is, the letters of this new word we are currently writing."}, {"start": 98.11999999999999, "end": 103.44, "text": " Luckily, the paper is even more interactive, meaning that you can also add a piece of"}, {"start": 103.44, "end": 109.52, "text": " text here and see how the GRU network processes it."}, {"start": 109.52, "end": 113.75999999999999, "text": " One of the main arguments of this paper is that when comparing these networks against"}, {"start": 113.75999999999999, "end": 119.84, "text": " each other in terms of quality, we shouldn't only look at the output text they generate."}, {"start": 119.84, "end": 125.56, "text": " For instance, it is possible for two models that work quite differently to have a very similar"}, {"start": 125.56, "end": 128.48, "text": " accuracy and score on these tests."}, {"start": 128.48, "end": 133.44, "text": " The author argues that we should look beyond these metrics and look at this kind of connectivity"}, {"start": 
133.44, "end": 135.16, "text": " information as well."}, {"start": 135.16, "end": 140.79999999999998, "text": " This way, we may find useful pieces of knowledge like the fact that GRU is better at utilizing"}, {"start": 140.79999999999998, "end": 143.56, "text": " longer-term contextual understanding."}, {"start": 143.56, "end": 149.24, "text": " A really cool finding indeed, and I am sure this will also be a useful visualization tool"}, {"start": 149.24, "end": 153.56, "text": " when developing new algorithms and finding faults in previous ones."}, {"start": 153.56, "end": 154.56, "text": " Love it!"}, {"start": 154.56, "end": 166.96, "text": " Thanks for watching and for your generous support, and I'll see you next time!"}]
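The segments above walk through how a GRU keeps a running memory of previous characters while completing text. As an illustration only, here is a minimal character-level GRU sketch in PyTorch; the layer sizes, the toy text, and all names are assumptions made for this sketch and are not taken from the Distill article discussed in the video.

```python
# Minimal character-level GRU sketch (illustrative only; sizes are arbitrary).
import torch
import torch.nn as nn

text = "the quick brown fox jumps over the lazy dog "
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}

class CharGRU(nn.Module):
    def __init__(self, vocab_size, hidden_size=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, vocab_size)

    def forward(self, x, h=None):
        # x holds character indices of shape (batch, sequence length).
        # The hidden state h is the network's memory of the previous letters.
        e = self.embed(x)
        out, h = self.gru(e, h)
        return self.head(out), h

model = CharGRU(len(chars))
# Ask for a next-character prediction at every position of a short sequence.
idx = torch.tensor([[stoi[c] for c in "the quick"]])
logits, _ = model(idx)
print(logits.shape)  # (1, 9, vocab_size): one next-character distribution per step
```

The connectivity visualizations in the article essentially ask how much of that hidden state still reflects letters seen many steps earlier.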
Two Minute Papers
https://www.youtube.com/watch?v=8ypnLjwpzK8
OpenAI GPT-2: An Almost Too Good Text Generator!
❤️ Support the show and pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 📝 The paper "Better Language Models and Their Implications" is available here: https://openai.com/blog/better-language-models/ GPT-2 Reddit bot: https://old.reddit.com/r/MachineLearning/comments/b3zlha/p_openais_gpt2based_reddit_bot_is_live/ Criticism: https://medium.com/@lowe.ryan.t/openais-gpt-2-the-model-the-hype-and-the-controversy-1109f4bfd5e8?sk=bc319cebc22fe0459574544828c84c6d The Bitter Lesson video: https://www.youtube.com/watch?v=wEgq6sT1uq8 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #OpenAI #GPT3 #GPT2
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is an incredible paper from OpenAI in which the goal is to teach an AI to read a piece of text and perform common natural language processing operations on it, for instance, answering questions, completing text, reading comprehension, summarization, and more. And not only that, but additionally, the AI has to be able to perform these tasks with as little supervision as possible. This means that we seek to unleash the algorithm that they call GPT-2 to read the internet and learn the intricacies of our language by itself. To perform this, of course, we need a lot of training data, and here the AI reads 40 gigabytes of internet text, which is 40 gigs of non-binary plain-text data, a stupendously large amount of text. It is always hard to put these big numbers in context. So, as an example, to train similar text completion algorithms, AI people typically reach for a text file containing every significant work of Shakespeare himself, and this file is approximately five megabytes. So the 40 gigabytes basically means an amount of text that is 8,000 times the size of Shakespeare's works. That's a lot of text. And now, let's have a look at how it fares with the text completion part. This part was written by a human, quoting: "In a shocking finding, scientists discovered a herd of unicorns living in a remote, previously unexplored valley in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English." And the AI continued the text the following way, quoting a short snippet of it: "The scientist named the population, after their distinctive horn, Ovid's Unicorn. These four-horned, silver-white unicorns were previously unknown to science." Whoa! Now note that this is clearly not perfect, if there is even such a thing as a perfect continuation, and it took 10 tries, which means that the algorithm was run 10 times and the best result was cherry-picked and recorded here. And despite all of this, this is a truly incredible result, especially given that the algorithm learns on its own. After giving it a piece of text, it can also answer questions in a quite competent manner. Worry not, later in this video I will show you more of these examples and briefly talk over them, so if you are curious, feel free to pause the video while you read the prompts and their completions. The validation part of the paper reveals that this method is able to achieve state-of-the-art results on several language modeling tasks, and you can see here that we still shouldn't expect it to match a human in terms of reading comprehension, which is the question-answering test. More on that in a moment. So, there are plenty of natural language processing algorithms out there that can perform some of these tasks. In fact, some articles already stated that there is not much new here, it's just the same problem, but stated in a more general manner and with more compute. Aha! It is not the first time that this happens. Remember our video by the name The Bitter Lesson. I've put a link to it in the video description, but in case you missed it, let me quote how Richard Sutton addressed this situation.
The bitter lesson is based on the historical observations that one, AI researchers have often tried to build knowledge into their agents; two, this always helps in the short term and is personally satisfying to the researcher; but three, in the long run it plateaus and even inhibits further progress; and four, breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning. The eventual success is tinged with bitterness and often incompletely digested, because it is success over a favored human-centric approach. So, what is the big lesson here? Why is GPT-2 so interesting? Well, big lesson number one: this is one of the clearer cases of what the quote was talking about, where we can do a whole lot given a lot of data and compute power, and we don't need to insert too much additional knowledge into our algorithms. And lesson number two: as a result, this algorithm becomes quite general, so it can perform more tasks than most other techniques. This is an amazing value proposition. I will also add that not every learning technique scales well when we add more compute. In fact, you can see here yourself that even GPT-2 plateaus on the summarization task. Making sure that these learning algorithms scale well is a great contribution in and of itself and should not be taken for granted. There has been a fair bit of discussion on whether OpenAI should publish the entirety of this model. They opted to release a smaller part of the source code and noted that they are aware that the full model could be used for nefarious purposes. Why did they do this? What is the matter with everyone having an AI with subhuman-level reading comprehension? Well, so far we have only talked about quality. But another key part is quantity. And boy, are these learning methods superhuman in terms of quantity! Just imagine that they can write articles with a chosen topic and sentiment all day long, and much quicker than human beings. Also note that the blueprint of the algorithm is described in the paper, and any top-tier research group is expected to be able to reproduce it. So does one release the full source code and models or not? This is quite a difficult question. We need to keep publishing both papers and source code to advance science, but we also have to find new ways to do it in an ethical manner. This needs more discussion and would definitely be worthy of a conference-style meeting. Or more. There is so much to talk about and so far we have really only scratched the surface, so make sure to have a look in the video description. I left a link to the paper and some more super interesting reading materials for you. Make sure to check them out. Also, just a quick comment on why this video came so late after the paper appeared. Since there were a lot of feelings and intense discussion on whether the algorithm should be published or not, I wanted to wait until the dust settled and there was enough information out there to create a sufficiently informed video for you. This, of course, means that we are late to the party and missed out on a whole lot of views and revenue, but that's okay. In fact, that's what we'll keep doing going forward to make sure you get the highest quality information that I can provide. If you have enjoyed this episode and would like to help us, please consider supporting us on Patreon. Remember, in our model, a dollar a month is almost nothing, but it keeps the papers coming. And there are hundreds of papers on my reading list.
As always, we are available through patreon.com/TwoMinutePapers, and the link is also available in the video description. Thanks for watching and for your generous support, and I'll see you next time.
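To make the "run it 10 times and cherry-pick" procedure from the transcript above more concrete, here is a rough, hypothetical sketch that assumes the publicly released small GPT-2 model and the Hugging Face transformers package; the prompt, sampling settings, and model size are my own illustrative choices, not the exact setup from the paper.

```python
# Sketch: sample several GPT-2 continuations and cherry-pick one by eye.
# Assumes `pip install torch transformers` and the small released "gpt2" model.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = ("In a shocking finding, scientists discovered a herd of unicorns "
          "living in a remote, previously unexplored valley.")
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    do_sample=True,           # stochastic sampling, so every run differs
    top_k=40,                 # keep only the 40 most likely next tokens
    max_length=120,
    num_return_sequences=10,  # the "10 tries" mentioned in the transcript
    pad_token_id=tokenizer.eos_token_id,
)

for i, out in enumerate(outputs):
    print(f"--- candidate {i} ---")
    print(tokenizer.decode(out, skip_special_tokens=True))
```

Reading through the ten candidates and keeping the most coherent one mirrors the cherry-picking caveat the video is careful to disclose.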
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.4, "end": 11.8, "text": " This is an incredible paper from OpenAI in which the goal is to teach an AI to read a piece of text"}, {"start": 11.8, "end": 15.4, "text": " and perform common natural language processing operations on it,"}, {"start": 15.4, "end": 23.400000000000002, "text": " for instance, answering questions, completing text, reading comprehension, summarization, and more."}, {"start": 23.400000000000002, "end": 28.6, "text": " And not only that, but additionally, the AI has to be able to perform these tasks"}, {"start": 28.6, "end": 31.6, "text": " with as little supervision as possible."}, {"start": 31.6, "end": 37.0, "text": " This means that we seek to unleash the algorithm that they call GPT-2"}, {"start": 37.0, "end": 42.400000000000006, "text": " to read the internet and learn the intricacies of our language by itself."}, {"start": 42.400000000000006, "end": 46.2, "text": " To perform this, of course, we need a lot of training data"}, {"start": 46.2, "end": 50.6, "text": " and here the AI reads 40 gigabytes of internet text,"}, {"start": 50.6, "end": 54.400000000000006, "text": " which is 40 gigs of non-binary plain-text data,"}, {"start": 54.400000000000006, "end": 57.2, "text": " which is a stupendously large amount of text."}, {"start": 57.2, "end": 60.800000000000004, "text": " It is always hard to put these big numbers in context."}, {"start": 60.800000000000004, "end": 65.0, "text": " So as an example, to train similar text completion algorithms,"}, {"start": 65.0, "end": 72.4, "text": " AI people typically reach out to a text file containing every significant work of Shakespeare himself"}, {"start": 72.4, "end": 76.60000000000001, "text": " and this file is approximately five megabytes."}, {"start": 76.60000000000001, "end": 80.2, "text": " So the 40 gigabytes basically means an amount of text"}, {"start": 80.2, "end": 84.60000000000001, "text": " that is 8,000 times the size of Shakespeare's works."}, {"start": 84.60000000000001, "end": 86.2, "text": " That's a lot of text."}, {"start": 86.2, "end": 90.4, "text": " And now, let's have a look at how it fares with the text completion part."}, {"start": 90.4, "end": 93.4, "text": " This part was written by a human, quoting,"}, {"start": 93.4, "end": 97.8, "text": " in a shocking finding, scientists discovered a herd of unicorns"}, {"start": 97.8, "end": 102.6, "text": " living in a remote previously unexplored valley in the Andes mountains."}, {"start": 102.6, "end": 108.60000000000001, "text": " Even more surprising to the researchers was the fact that the unicorns spoke perfect English."}, {"start": 108.60000000000001, "end": 112.2, "text": " And the AI continued the text the following way,"}, {"start": 112.2, "end": 114.2, "text": " quoting a short snippet of it."}, {"start": 114.2, "end": 119.4, "text": " The scientists named the population after their distinctive horn of its unicorn."}, {"start": 119.4, "end": 125.0, "text": " These four horned silver-white unicorns were previously unknown to science."}, {"start": 125.0, "end": 126.2, "text": " Whoa!"}, {"start": 126.2, "end": 129.0, "text": " Now note that this is clearly not perfect"}, {"start": 129.0, "end": 132.4, "text": " if there is even such a thing as a perfect continuation"}, {"start": 132.4, "end": 137.4, "text": " and it took 10 tries, which means that the algorithm was run 10 times"}, {"start": 137.4, "end": 141.0, "text": 
" and the best result was cherry-picked and recorded here."}, {"start": 141.0, "end": 145.2, "text": " And despite all of these, this is a truly incredible result,"}, {"start": 145.2, "end": 148.6, "text": " especially given that the algorithm learns on its own."}, {"start": 148.6, "end": 154.0, "text": " After giving it a piece of text, it can also answer questions in a quiet competent manner."}, {"start": 154.0, "end": 158.0, "text": " Whereinat, later in this video, I will show you more of these examples"}, {"start": 158.0, "end": 162.8, "text": " and likely talk over them, so if you are curious, feel free to pause the video"}, {"start": 162.8, "end": 165.4, "text": " while you read the prompts and their completions."}, {"start": 165.4, "end": 168.8, "text": " The validation part of the paper reveals that this method"}, {"start": 168.8, "end": 174.0, "text": " is able to achieve state-of-the-art results on several language modeling tasks"}, {"start": 174.0, "end": 177.8, "text": " and you can see here that we still shouldn't expect it to match a human"}, {"start": 177.8, "end": 181.8, "text": " in terms of reading comprehension, which is the question-answering test."}, {"start": 181.8, "end": 183.8, "text": " More on that in a moment."}, {"start": 183.8, "end": 188.0, "text": " So, there are plenty of natural language processing algorithms out there"}, {"start": 188.0, "end": 190.60000000000002, "text": " that can perform some of these tasks."}, {"start": 190.60000000000002, "end": 195.20000000000002, "text": " In fact, some articles already stated that there is not much new here"}, {"start": 195.2, "end": 201.2, "text": " it's just the same problem, but stated in a more general manner and with more compute."}, {"start": 201.2, "end": 202.2, "text": " Aha!"}, {"start": 202.2, "end": 204.6, "text": " It is not the first time that this happens."}, {"start": 204.6, "end": 208.0, "text": " Remember our video by the name TheBitterLesson."}, {"start": 208.0, "end": 210.2, "text": " I've put a link to it in the video description,"}, {"start": 210.2, "end": 215.2, "text": " but in case you missed it, let me quote how Richard Sutton addressed this situation."}, {"start": 215.2, "end": 219.2, "text": " The bitter lesson is based on the historical observations that one,"}, {"start": 219.2, "end": 223.6, "text": " AI researchers have often tried to build knowledge into their agents,"}, {"start": 223.6, "end": 229.4, "text": " two, this always helps in the short term and is personally satisfying to the researcher,"}, {"start": 229.4, "end": 235.79999999999998, "text": " but three, in the long run it plateaus and even inhibits further progress"}, {"start": 235.79999999999998, "end": 240.79999999999998, "text": " and four, breakthrough progress eventually arise by an opposing approach"}, {"start": 240.79999999999998, "end": 244.79999999999998, "text": " based on scaling computation by search and learning."}, {"start": 244.79999999999998, "end": 250.0, "text": " The eventual success is tinged with bitterness and often incompletely digested"}, {"start": 250.0, "end": 254.0, "text": " because it's success over a favored human-centric approach."}, {"start": 254.0, "end": 256.0, "text": " So, what is the big lesson here?"}, {"start": 256.0, "end": 259.0, "text": " Why is GPT2 so interesting?"}, {"start": 259.0, "end": 266.0, "text": " Well, big lesson number one is this is one of the clearer cases of what the quote was talking about,"}, {"start": 266.0, "end": 270.2, "text": " where we can do a whole lot 
given a lot of data and compute power"}, {"start": 270.2, "end": 275.0, "text": " and we don't need to insert too much additional knowledge into our algorithms."}, {"start": 275.0, "end": 280.0, "text": " And lesson number two, as a result, this algorithm becomes quite general"}, {"start": 280.0, "end": 284.0, "text": " so it can perform more tasks than most other techniques."}, {"start": 284.0, "end": 287.0, "text": " This is an amazing value proposition."}, {"start": 287.0, "end": 292.0, "text": " I will also add that not every learning technique scales well when we add more compute."}, {"start": 292.0, "end": 298.0, "text": " In fact, you can see here yourself that even GPT2 plateaus on the summarization task."}, {"start": 298.0, "end": 303.0, "text": " Making sure that these learning algorithms scale well is a great contribution"}, {"start": 303.0, "end": 306.0, "text": " in and of itself and should not be taken for granted."}, {"start": 306.0, "end": 312.0, "text": " There has been a fair bit of discussion on whether openAI should publish the entirety of this model."}, {"start": 312.0, "end": 317.0, "text": " They opted to release a smaller part of the source code and noted that they are aware"}, {"start": 317.0, "end": 320.0, "text": " that the full model could be used for nefarious purposes."}, {"start": 320.0, "end": 322.0, "text": " Why did they do this?"}, {"start": 322.0, "end": 328.0, "text": " What is the matter with everyone having an AI with a subhuman level reading comprehension?"}, {"start": 328.0, "end": 332.0, "text": " Well, so far we have only talked about quality."}, {"start": 332.0, "end": 335.0, "text": " But another key part is quantity."}, {"start": 335.0, "end": 340.0, "text": " And boy, are these learning methods superhuman in terms of quantity?"}, {"start": 340.0, "end": 345.0, "text": " Just imagine that they can write articles with a chosen topic and sentiment all day long"}, {"start": 345.0, "end": 348.0, "text": " and much quicker than human beings."}, {"start": 348.0, "end": 352.0, "text": " Also note that the blueprint of the algorithm is described in the paper"}, {"start": 352.0, "end": 357.0, "text": " and the top tier research group is expected to be able to reproduce it."}, {"start": 357.0, "end": 361.0, "text": " So does one release the full source code and models or not?"}, {"start": 361.0, "end": 364.0, "text": " This is a quite difficult question."}, {"start": 364.0, "end": 369.0, "text": " We need to keep publishing both papers and source code to advance science,"}, {"start": 369.0, "end": 374.0, "text": " but we also have to find new ways to do it in an ethical manner."}, {"start": 374.0, "end": 379.0, "text": " This needs more discussion and would definitely be worthy of a conference-style meeting."}, {"start": 379.0, "end": 380.0, "text": " Or more."}, {"start": 380.0, "end": 385.0, "text": " There is so much to talk about and so far we have really only scratched the surface,"}, {"start": 385.0, "end": 387.0, "text": " so make sure to have a look in the video description."}, {"start": 387.0, "end": 392.0, "text": " I left a link to the paper and some more super interesting reading materials for you."}, {"start": 392.0, "end": 394.0, "text": " Make sure to check them out."}, {"start": 394.0, "end": 399.0, "text": " Also, just a quick comment on why this video came so late after the paper has appeared."}, {"start": 399.0, "end": 405.0, "text": " Since there were a lot of feelings and intense discussion on whether the algorithm should be 
published or not,"}, {"start": 405.0, "end": 410.0, "text": " I was looking to wait until the dust settles and there is enough information out there"}, {"start": 410.0, "end": 413.0, "text": " to create a sufficiently informed video for you."}, {"start": 413.0, "end": 419.0, "text": " This, of course, means that we are late to the party and missed out on a whole lot of views and revenue,"}, {"start": 419.0, "end": 420.0, "text": " but that's okay."}, {"start": 420.0, "end": 427.0, "text": " In fact, that's what we'll keep doing going forward to make sure you get the highest quality information that I can provide."}, {"start": 427.0, "end": 430.0, "text": " If you have enjoyed this episode and would like to help us,"}, {"start": 430.0, "end": 432.0, "text": " please consider supporting us on Patreon."}, {"start": 432.0, "end": 438.0, "text": " Remember our model, the dollar a month is almost nothing, but it keeps the papers coming."}, {"start": 438.0, "end": 441.0, "text": " And there are hundreds of papers on my reading list."}, {"start": 441.0, "end": 446.0, "text": " As always, we are available through patreon.com, slash two-minute papers,"}, {"start": 446.0, "end": 449.0, "text": " and the link is also available in the video description."}, {"start": 449.0, "end": 477.0, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=UoKXJzTYDpw
Why Are Cloth Simulations So Hard?
📝 The paper "I-Cloth: Incremental Collision Handling for GPU-Based Interactive Cloth Simulation" is available here: https://min-tang.github.io/home/ICloth/ ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Today we are going to talk a bit about the glory and the woes of cloth simulation programs. In these simulations we have several 3D models that are built from up to several hundreds of thousands of triangles. And as they move in time, the interesting part of the simulation is whenever collisions happen; however, evaluating how these meshes collide is quite difficult and time consuming. Basically, we have to tell an algorithm that we have a piece of cloth with 100,000 connected triangles here and another one there, and now have a look and tell me which collides with which and how they bend and change in response to these forces. And don't forget about friction and repulsive forces. Also, please be accurate, because every small error adds up over time, and do it several times a second so we can have a look at the results interactively. Well, this is a quite challenging problem and it takes quite long to compute, so much so that 70 to 80% of the total time taken to perform the simulation is spent on collision handling. So how can we make it not take forever? Well, one way would be to try to make sure that we can run this collision handling step on the graphics card. This is exactly what this work does, and in order to do this we have to make sure that all of these evaluations can be performed in parallel. Of course, this is easier said than done. Another difficulty is choosing the appropriate time steps. These simulations are run in a way that we check and resolve all of the collisions, and then we can advance the simulation forward by a tiny amount. This amount is called a time step, and choosing the appropriate time step has always been a challenge. You see, if we set it too large, we will be done faster and compute less; however, we will almost certainly miss some collisions because we skipped over them. The simulation may end up in a state that is so incorrect that it is impossible to recover from, and we have to throw the entire thing out. If we set it too low, we get a more robust simulation; however, it will take many hours to days to compute. To remedy this, this technique is built in a way such that we can use larger time steps. That's excellent news. Also, the collision computation part is now up to nine times faster, and if we look at the cloth simulation as a whole, it can be made over three times faster. As you see here, this is especially nice because we can test how these garments react to our manipulations at eight to ten frames per second. If you have a closer look at the paper, you will find another key observation, which states that most of the time only a small subregion of the simulated cloth undergoes deformation due to response forces, and this knowledge can be kept track of, which contributed to cutting down the simulation time significantly. Thanks for watching and for your generous support, and I'll see you next time.
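The time-step trade-off described in the transcript is easy to reproduce in a toy setting. The sketch below is not the I-Cloth GPU collision pipeline; it is just a hypothetical one-particle mass-spring integrator (symplectic Euler) whose stiffness, mass, and step sizes are arbitrary choices, showing that too large a step blows up while a tiny step stays stable at a much higher cost.

```python
# Toy illustration of the time-step trade-off (not the I-Cloth method).
# One mass on a stiff spring, integrated with symplectic (semi-implicit) Euler.

def simulate(dt, total_time=2.0, stiffness=1000.0, mass=1.0):
    x, v = 1.0, 0.0                # start stretched by one unit, at rest
    steps = int(total_time / dt)
    for _ in range(steps):
        a = -stiffness / mass * x  # spring acceleration
        v += a * dt
        x += v * dt
    return x, steps

for dt in (1e-1, 1e-2, 1e-3):
    x, steps = simulate(dt)
    print(f"dt={dt:>6}: {steps:>5} steps, final position {x:+.3e}")

# With dt = 0.1 the position explodes to astronomically large values, the
# analogue of a cloth state "so incorrect that it is impossible to recover
# from"; with dt = 0.001 the motion stays bounded but costs 100x more steps.
```

A real cloth solver faces the same trade-off with millions of degrees of freedom and collision checks on top, which is why enabling larger stable time steps matters so much.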
[{"start": 0.0, "end": 4.24, "text": " Dear Fellow Scholars, this is Dominic Papers with Karojona Ifeher."}, {"start": 4.24, "end": 10.24, "text": " Today we are going to talk a bit about the glory and the woes of cloth simulation programs."}, {"start": 10.24, "end": 17.52, "text": " In these simulations we have several 3D models that are built from up to several hundreds of thousands of triangles."}, {"start": 17.52, "end": 23.36, "text": " And as they move in time, the interesting part of the simulation is whenever collisions happen,"}, {"start": 23.36, "end": 29.12, "text": " however evaluating how these meshes collide is quite difficult and time consuming."}, {"start": 29.12, "end": 36.0, "text": " Basically we have to tell an algorithm that we have a piece of cloth with a 100,000 connected triangles here"}, {"start": 36.0, "end": 42.8, "text": " and another one there and now have a look and tell me which collides with which and how they bend"}, {"start": 42.8, "end": 48.32, "text": " and change in response to these forces. And don't forget about friction and repulsive forces."}, {"start": 48.96, "end": 56.64, "text": " Also please be accurate because every small error adds up over time and do it several times a second"}, {"start": 56.64, "end": 62.72, "text": " so we can have a look at the results interactively. Well this is a quite challenging problem"}, {"start": 62.72, "end": 70.08, "text": " and it takes quite long to compute so much so that 70 to 80% of the total time taken to perform"}, {"start": 70.08, "end": 76.16, "text": " the simulation is spent with collision handling. So how can we make it not take forever?"}, {"start": 76.88, "end": 82.24000000000001, "text": " Well one way would be to try to make sure that we can run this collision handling step on the"}, {"start": 82.24, "end": 88.24, "text": " graphics card. This is exactly what this work does and in order to do this we have to make sure"}, {"start": 88.24, "end": 95.11999999999999, "text": " that all of these evaluations can be performed in parallel. Of course this is easier said than done."}, {"start": 95.11999999999999, "end": 101.03999999999999, "text": " Another difficulty is choosing the appropriate time steps. These simulations are run in a way"}, {"start": 101.03999999999999, "end": 107.52, "text": " that we check and resolve all of the collisions and then we can advance the simulation forward"}, {"start": 107.52, "end": 113.84, "text": " by a tiny amount. This amount is called a time step and choosing the appropriate time step"}, {"start": 113.84, "end": 121.11999999999999, "text": " has always been a challenge. You see if we set it to two large we will be done faster and compute less"}, {"start": 121.11999999999999, "end": 126.56, "text": " however we will almost certainly miss some collisions because we skipped over them."}, {"start": 126.56, "end": 132.56, "text": " The simulation may end up in a state that is so incorrect that it is impossible to recover from"}, {"start": 132.56, "end": 139.44, "text": " and we have to throw the entire thing out. If we set it to two low we get a more robust simulation"}, {"start": 139.44, "end": 146.64000000000001, "text": " however it will take many hours to days to compute. To remedy this this technique is built in a way"}, {"start": 146.64000000000001, "end": 153.52, "text": " such that we can use larger time steps. That's excellent news. 
Also the collision computation part"}, {"start": 153.52, "end": 159.68, "text": " is now up to nine times faster and if we look at the class simulation as a whole that can be made"}, {"start": 159.68, "end": 166.24, "text": " over three times faster. As you see here this is especially nice because we can test how these"}, {"start": 166.24, "end": 172.0, "text": " garments react to our manipulations at eight to ten frames per second. If you have a closer look"}, {"start": 172.0, "end": 178.48000000000002, "text": " at the paper you will find another key observation which states that most of the time only a small"}, {"start": 178.48000000000002, "end": 184.64000000000001, "text": " sub region of the simulated cloth undergoes deformation due to response forces and this knowledge"}, {"start": 184.64, "end": 190.07999999999998, "text": " can be kept track of which contributed to cutting down the simulation time significantly."}, {"start": 190.08, "end": 219.92000000000002, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=wEgq6sT1uq8
A Bitter AI Lesson - Compute Reigns Supreme!
📝 The article "The Bitter Lesson" is available here: http://www.incompleteideas.net/IncIdeas/BitterLesson.html Nice twitter thread on this video: https://twitter.com/karoly_zsolnai/status/1114867598724931585 ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Bryan Learn, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Image sources: https://en.wikipedia.org/wiki/Ciphertext https://en.wikibooks.org/wiki/Foundations_of_Computer_Science/Encryption https://en.wikipedia.org/wiki/Letter_frequency https://commons.wikimedia.org/wiki/File:A-pigpen-message.svg Thumbnail background image credit: https://pixabay.com/images/id-1587673/ Video background: https://pixabay.com/hu/photos/olaszorsz%C3%A1g-hegyek-braies-t%C3%B3-t%C3%B3-1587287/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Before we start, I'd like to tell you that this video is not about a paper and it is not going to be two minutes. Welcome to Two Minute Papers. This piece bears the name The Bitter Lesson, and was written by Richard Sutton, a legendary Canadian researcher who has contributed a great deal to reinforcement learning research. And what a piece this is. It is a short article on how we should do research, and ever since I read it, I couldn't stop thinking about it, and as a result, I couldn't not make a video on this topic. We really have to talk about this. It takes less than five minutes to read, so before we talk about it, you can pause this video and click the link to it in the video description. So in this article, he makes two important observations. Number one, he argues that the best performing learning techniques are the ones that can leverage computation, or in other words, methods that improve significantly as we add more compute power. Long ago, people tried to encode lots of human knowledge of strategies in their Go AI, but did not have enough compute power to make a truly great algorithm. And now we have AlphaGo, which contains minimal information about Go itself, and it is better than the best human players in the world. And number two, he recommends that we try to put as few constraints on the problem as possible. He argues that we shouldn't try to rebuild the mind, but try to build a method that can capture arbitrary complexity and scale it up with hardware. Don't try to make it work like your brain; make something as general as possible and make sure it can leverage computation, and it will come up with something that is way better than our brain. So in short, keep the problem general and don't encode your knowledge of the domain into your learning algorithms. The weight of this sentence is not to be underestimated, because these seemingly simple observations sound really counterintuitive. This seemingly encourages us to do the exact opposite of what we are currently doing. Let me tell you why. I have fond memories of the early lectures I attended in cryptography, where we had a look at ciphertexts. These are very much like the encrypted messages that children like to write each other at school, which look like nonsense to the unassuming teacher, but can be easily decoded by another child when provided with a key. This key describes which symbol corresponds to which letter. Let's assume that one symbol means one letter, but if we don't have any additional knowledge, this is still not an easy problem to crack. But in this course, soon, we coded up algorithms that were able to crack messages like this in less than a second. How exactly? Well, by inserting additional knowledge into the system. For instance, we know the relative frequency of each letter in every language. For instance, in English, the letter E is the most common by far, and then come T, A, and the others. The fact that we are not seeing letters, but symbols, doesn't really matter, because we just look at the most frequent symbol in the ciphertext and we immediately know that, okay, that symbol is going to be the letter E, and so on. See what we have done here? Just by inserting a tiny bit of knowledge, suddenly a very difficult problem turned into a trivial problem. So much so that anyone can implement this after their second cryptography lecture. And somehow Richard Sutton argues that we shouldn't do that. Doesn't that sound crazy? So what gives?
Well, let me explain through an example from light transport research that demonstrates his point. Path tracing is one of the first and simplest algorithms in the field, which in many regards is vastly inferior to Metropolis Light Transport, which is a much smarter algorithm. However, with our current powerful graphics cards, we can compute so many more rays with path tracing that in many cases it wins over Metropolis. In this case, compute reigns supreme. The hardware scaling out-muscles the smarts, and we haven't even talked about how much easier it is for engineers to maintain and improve a simpler system. The area of natural language processing has many decades of research to teach machines how to understand, simplify, correct, or even generate text. After so many papers and handcrafted techniques, which insert our knowledge of linguistics into our techniques, who would have thought that OpenAI would be able to come up with a relatively simple neural network with so little prior knowledge that is able to write articles that sound remarkably lifelike. We will talk about this method in more detail in this series soon. And here comes the bitter lesson. Doing research the classical way, by inserting knowledge into a solution, is very satisfying. It feels right, it feels like doing research, progressing, and it makes it easy to show in a new paper what exactly the key contributions are. However, it may not be the most effective way forward. Quoting the article, I recommend that you pay close attention to this. The bitter lesson is based on the historical observations that one, AI researchers have often tried to build knowledge into their agents; two, this always helps in the short term and is personally satisfying to the researcher; but three, in the long run it plateaus and even inhibits further progress; and four, breakthrough progress eventually arrives by an opposing approach based on scaling computation by search and learning. The eventual success is tinged with bitterness and often incompletely digested, because it is success over a favored human-centric approach. In our cryptography problem from earlier, of course, the letter frequency solution and other linguistic tricks are clearly much, much better than a solution that doesn't know anything about the domain. Of course. However, when later we have 100 times faster hardware, this knowledge may actually inhibit finding a solution that is way, way better. This is why he also claims that we shouldn't try to build intelligence by modeling our brain in a computer simulation. It's not that the "our brain" approach doesn't work. It does, but only in the short run; in the long run, we will be able to add more hardware to a learning algorithm and it will find more effective structures to solve problems, and it will eventually out-muscle our handcrafted techniques. In short, this is the lesson. When facing a learning problem, keep your domain knowledge out of the solution and use more compute. More compute gives us more learning, and more general formulations give us more chance to find something relevant. So, this is indeed a harsh lesson. This piece sparked great debates on Twitter. I have seen great points for and against this sentiment. What do you think? Let me know in the comments. As with everything in science, this piece should be subject to debate and criticism. And therefore, I'd love to read as many people's takes on it as possible. And this piece has implications on my thinking as well.
Please allow me to add three more personal notes that kept me up at night in the last few days. Note number one: the bottom line is that whenever we build a new algorithm, we should always bear in mind which parts would be truly useful if we had 100 times the compute power that we have now. Note number two: a corollary of this thinking is that, arguably, hardware engineers who make these new and more powerful graphics cards may be contributing at least as much to AI as most of AI research does. And note number three: to me, it feels like this almost implies that it is best to join the big guys where all the best hardware is. I work in an amazing small-to-mid-size lab at the Technical University of Vienna, and in the last few years I have given relatively little consideration to the invitations from some of the more coveted and well-funded labs. Was it a mistake? Should I change that? I really don't know for sure. If for some reason you haven't read the piece at the start of the video, make sure to do it after watching this. It's really worth it. In the meantime, interestingly, the non-profit AI research lab OpenAI also established a for-profit, or what they like to call a capped-profit company, to be able to compete with the other big guys like DeepMind and Facebook Reality Labs. I think Richard has a solid point here. Thanks for watching and for your generous support, and I'll see you next time.
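The letter-frequency attack on a substitution cipher described in this transcript is simple enough to sketch. The snippet below is a hypothetical toy, not code from any lecture referenced in the video; it lines up ciphertext symbol frequencies with typical English letter frequencies, which only yields a rough first guess on short messages.

```python
# Toy frequency-analysis attack on a monoalphabetic substitution cipher.
# A real attack would refine this first guess with bigram statistics or a
# dictionary check; on short texts the raw frequency ranking is often wrong.
import string
from collections import Counter

# English letters, roughly ordered from most to least frequent.
ENGLISH_BY_FREQ = "etaoinshrdlcumwfgypbvkjxqz"

def guess_plaintext(ciphertext: str) -> str:
    letters = [c for c in ciphertext.lower() if c.isalpha()]
    ranked = [sym for sym, _ in Counter(letters).most_common()]
    # Map the most common cipher symbol to 'e', the next to 't', and so on.
    key = {sym: ENGLISH_BY_FREQ[i] for i, sym in enumerate(ranked)}
    return "".join(key.get(c, c) for c in ciphertext.lower())

# Demo: build a simple substitution (a shift by three) and try to crack it.
secret = str.maketrans(string.ascii_lowercase,
                       string.ascii_lowercase[3:] + string.ascii_lowercase[:3])
ciphertext = "the most frequent symbol usually hides the letter e".translate(secret)
print(guess_plaintext(ciphertext))
```

The point of the transcript stands either way: this hand-inserted linguistic knowledge works beautifully today, and that is exactly the kind of domain knowledge the bitter lesson warns may eventually be out-scaled.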
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karo Zonai Fahir."}, {"start": 4.32, "end": 8.92, "text": " Before we start, I'd like to tell you that this video is not about a paper and it is"}, {"start": 8.92, "end": 10.76, "text": " not going to be two minutes."}, {"start": 10.76, "end": 12.56, "text": " Welcome to Two Minute Papers."}, {"start": 12.56, "end": 17.88, "text": " This piece bears the name, The Bitter Lesson, and was written by Richard Sutton, a legendary"}, {"start": 17.88, "end": 23.080000000000002, "text": " Canadian researcher who has contributed a great deal to reinforcement learning research."}, {"start": 23.080000000000002, "end": 24.8, "text": " And what a piece this is."}, {"start": 24.8, "end": 29.76, "text": " It is a short article on how we should do research and ever since I read it, I couldn't"}, {"start": 29.76, "end": 35.28, "text": " stop thinking about it and as a result, I couldn't not make a video on this topic."}, {"start": 35.28, "end": 37.24, "text": " We really have to talk about this."}, {"start": 37.24, "end": 41.96, "text": " It takes less than five minutes to read, so before we talk about it, you can pause this"}, {"start": 41.96, "end": 45.8, "text": " video and click the link to it in the video description."}, {"start": 45.8, "end": 49.64, "text": " So in this article, he makes two important observations."}, {"start": 49.64, "end": 54.8, "text": " Number one, he argues that the best performing learning techniques are the ones that can"}, {"start": 54.8, "end": 61.36, "text": " leverage computation or in other words, methods that improve significantly as we add more"}, {"start": 61.36, "end": 62.8, "text": " compute power."}, {"start": 62.8, "end": 68.64, "text": " Long ago, people tried to encode lots of human knowledge of strategies in their Go AI,"}, {"start": 68.64, "end": 73.2, "text": " but did not have enough compute power to make a truly great algorithm."}, {"start": 73.2, "end": 79.16, "text": " And now we have AlphaGo, which contains minimal information about Go itself and it is better"}, {"start": 79.16, "end": 82.08, "text": " than the best human players in the world."}, {"start": 82.08, "end": 87.8, "text": " And number two, he recommends that we try to put as few constraints on the problem as"}, {"start": 87.8, "end": 89.12, "text": " possible."}, {"start": 89.12, "end": 94.24, "text": " He argues that we shouldn't try to rebuild the mind, but try to build a method that can"}, {"start": 94.24, "end": 99.03999999999999, "text": " capture arbitrary complexity and scale it up with hardware."}, {"start": 99.03999999999999, "end": 104.16, "text": " Don't try to make it work like your brain, make something as general as possible and make"}, {"start": 104.16, "end": 109.64, "text": " sure it can leverage computation and it will come up with something that is way better"}, {"start": 109.64, "end": 111.03999999999999, "text": " than our brain."}, {"start": 111.04, "end": 116.48, "text": " So in short, keep the problem general and don't encode your knowledge of the domain into"}, {"start": 116.48, "end": 118.52000000000001, "text": " your learning algorithms."}, {"start": 118.52000000000001, "end": 124.36000000000001, "text": " The weight of this sentence is not to be underestimated because these seemingly simple observations"}, {"start": 124.36000000000001, "end": 127.08000000000001, "text": " sound really counterintuitive."}, {"start": 127.08000000000001, "end": 132.76, "text": " This seemingly 
encourages us to do the exact opposite of what we are currently doing."}, {"start": 132.76, "end": 134.04000000000002, "text": " Let me tell you why."}, {"start": 134.04000000000002, "end": 138.96, "text": " I have phone memories of my early lectures I attended to in cryptography where we had"}, {"start": 138.96, "end": 141.24, "text": " a look at ciphertext."}, {"start": 141.24, "end": 146.36, "text": " These are very much like encrypted messages that children like to write each other at school,"}, {"start": 146.36, "end": 152.28, "text": " which looks like nonsense for the unassuming teacher, but can be easily decoded by another"}, {"start": 152.28, "end": 155.4, "text": " child when provided with a key."}, {"start": 155.4, "end": 159.64000000000001, "text": " This key describes which symbol corresponds to which letter."}, {"start": 159.64000000000001, "end": 165.08, "text": " Let's assume that one symbol means one letter, but if we don't have any additional knowledge,"}, {"start": 165.08, "end": 167.84, "text": " this is still not an easy problem to crack."}, {"start": 167.84, "end": 173.32, "text": " But in this course, soon, we coded up algorithms that were able to crack messages like this"}, {"start": 173.32, "end": 175.68, "text": " in less than a second."}, {"start": 175.68, "end": 176.92000000000002, "text": " How exactly?"}, {"start": 176.92000000000002, "end": 181.12, "text": " Well, by inserting additional knowledge into the system."}, {"start": 181.12, "end": 186.24, "text": " For instance, we know the relative frequency of each letter in every language."}, {"start": 186.24, "end": 192.8, "text": " For instance, in English, the letter E is the most common by far and then comes T, A"}, {"start": 192.8, "end": 194.28, "text": " and the others."}, {"start": 194.28, "end": 199.44, "text": " The fact that we are not seeing letters, but symbols, doesn't really matter because we"}, {"start": 199.44, "end": 204.32, "text": " just look at the most frequent symbol in the ciphertext and we immediately know that"}, {"start": 204.32, "end": 209.2, "text": " okay, that symbol is going to be the letter E and so on."}, {"start": 209.2, "end": 210.96, "text": " See what we have done here?"}, {"start": 210.96, "end": 216.68, "text": " Just by inserting a tiny bit of knowledge, suddenly a very difficult problem turned into"}, {"start": 216.68, "end": 218.56, "text": " a trivial problem."}, {"start": 218.56, "end": 223.68, "text": " So much so that anyone can implement this after their second cryptography lecture."}, {"start": 223.68, "end": 227.24, "text": " And somehow Richard Sutton argues that we shouldn't do that."}, {"start": 227.24, "end": 229.6, "text": " Doesn't that sound crazy?"}, {"start": 229.6, "end": 231.24, "text": " So what gives?"}, {"start": 231.24, "end": 236.12, "text": " Well, let me explain through an example from Light Transport Research that demonstrates"}, {"start": 236.12, "end": 237.6, "text": " his point."}, {"start": 237.6, "end": 242.64000000000001, "text": " Past tracing is one of the first and simplest algorithms in the field, which in many"}, {"start": 242.64000000000001, "end": 248.52, "text": " regards is vastly inferior to metropolis Light Transport, which is a much smarter algorithm."}, {"start": 248.52, "end": 253.88000000000002, "text": " However, with our current powerful graphics cards, we can compute so many more rays with"}, {"start": 253.88000000000002, "end": 258.6, "text": " past tracing that in many cases it wins over metropolis."}, {"start": 258.6, 
"end": 261.24, "text": " In this case, compute rain supreme."}, {"start": 261.24, "end": 266.92, "text": " The hardware scaling out muscles the smarts and we haven't even talked about how much easier"}, {"start": 266.92, "end": 272.08, "text": " it is for engineers to maintain and improve a simpler system."}, {"start": 272.08, "end": 277.64, "text": " The area of natural language processing has many decades of research to teach machines"}, {"start": 277.64, "end": 283.47999999999996, "text": " how to understand, simplify, correct, or even generate text."}, {"start": 283.47999999999996, "end": 288.56, "text": " After so many papers and handcrafted techniques, which insert our knowledge of linguistics"}, {"start": 288.56, "end": 293.88, "text": " into our techniques, who would have thought that open AI would be able to come up with a"}, {"start": 293.88, "end": 299.88, "text": " relatively simple neural network with so little prior knowledge that is able to write articles"}, {"start": 299.88, "end": 302.52, "text": " that sound remarkably lifelike."}, {"start": 302.52, "end": 306.59999999999997, "text": " We will talk about this method in more detail in this series soon."}, {"start": 306.6, "end": 309.16, "text": " And here comes the bitter lesson."}, {"start": 309.16, "end": 315.40000000000003, "text": " Doing research the classical way of inserting knowledge into a solution is very satisfying."}, {"start": 315.40000000000003, "end": 320.84000000000003, "text": " It feels right, it feels like doing research, progressing, and it makes it easy to show"}, {"start": 320.84000000000003, "end": 324.88, "text": " in a new paper what exactly the key contributions are."}, {"start": 324.88, "end": 328.92, "text": " However, it may not be the most effective way forward."}, {"start": 328.92, "end": 333.56, "text": " Quoting the article, I recommend that you pay close attention to this."}, {"start": 333.56, "end": 339.24, "text": " The bitter lesson is based on the historical observations that one, AI researchers have"}, {"start": 339.24, "end": 342.44, "text": " often tried to build knowledge into their agents."}, {"start": 342.44, "end": 348.68, "text": " Two, this always helps in the short term and is personally satisfying to the researcher."}, {"start": 348.68, "end": 354.68, "text": " But three, in the long run, it plateaus and even inhibits further progress."}, {"start": 354.68, "end": 360.68, "text": " And four, breakthrough progress eventually arrives by an opposing approach based on scaling"}, {"start": 360.68, "end": 363.64, "text": " computation by search and learning."}, {"start": 363.64, "end": 369.48, "text": " The eventual success is tinged with bitterness and often incompletely digested because it's"}, {"start": 369.48, "end": 373.16, "text": " success over a favored human centric approach."}, {"start": 373.16, "end": 378.12, "text": " In our cryptography problem from earlier, of course, the letter frequency solution and other"}, {"start": 378.12, "end": 383.84000000000003, "text": " linguistic tricks are clearly much, much better than a solution that doesn't know anything"}, {"start": 383.84000000000003, "end": 385.32, "text": " about the domain."}, {"start": 385.32, "end": 390.64, "text": " Of course, however, when later we have 100 times faster hardware,"}, {"start": 390.64, "end": 396.24, "text": " this knowledge may actually inhibit finding a solution that is way, way better."}, {"start": 396.24, "end": 401.4, "text": " This is why he also claims that we shouldn't try to build 
intelligence by modeling our brain"}, {"start": 401.4, "end": 403.44, "text": " in a computer simulation."}, {"start": 403.44, "end": 406.15999999999997, "text": " It's not that the our brain approach doesn't work."}, {"start": 406.15999999999997, "end": 411.68, "text": " It does, but on the short run, on the long run, we will be able to add more hardware"}, {"start": 411.68, "end": 417.2, "text": " to a learning algorithm and it will find more effective structures to solve problems and"}, {"start": 417.2, "end": 421.2, "text": " it will eventually out-muscle our handcrafted techniques."}, {"start": 421.2, "end": 423.32, "text": " In short, this is the lesson."}, {"start": 423.32, "end": 428.12, "text": " When facing a learning problem, keep your domain knowledge out of the solution and use"}, {"start": 428.12, "end": 429.59999999999997, "text": " more compute."}, {"start": 429.59999999999997, "end": 434.56, "text": " More compute gives us more learning and more general formulations give us more chance"}, {"start": 434.56, "end": 436.28, "text": " to find something relevant."}, {"start": 436.28, "end": 439.48, "text": " So, this is indeed a harsh lesson."}, {"start": 439.48, "end": 442.24, "text": " This piece sparked great debates on Twitter."}, {"start": 442.24, "end": 445.91999999999996, "text": " I have seen great points for and against this sentiment."}, {"start": 445.91999999999996, "end": 446.91999999999996, "text": " What do you think?"}, {"start": 446.92, "end": 451.08000000000004, "text": " Let me know in the comments as everything in science, this piece should be subject to"}, {"start": 451.08000000000004, "end": 452.96000000000004, "text": " debate and criticism."}, {"start": 452.96000000000004, "end": 457.36, "text": " And therefore, I'd love to read as many people's take on it as possible."}, {"start": 457.36, "end": 460.8, "text": " And this piece has implications on my thinking as well."}, {"start": 460.8, "end": 465.44, "text": " Please allow me to add three more personal notes that kept me up at night in the last few"}, {"start": 465.44, "end": 466.68, "text": " days."}, {"start": 466.68, "end": 471.68, "text": " Note number one, the bottom line is whenever we build a new algorithm, we should always"}, {"start": 471.68, "end": 477.8, "text": " bear in mind which parts would be truly useful if we had 100 times the compute power that"}, {"start": 477.8, "end": 479.52, "text": " we have now."}, {"start": 479.52, "end": 485.0, "text": " Note number two, a corollary of this thinking is that arguably hardware engineers who make"}, {"start": 485.0, "end": 490.56, "text": " this new and more powerful graphics cards may be contributing the very least as much to"}, {"start": 490.56, "end": 494.0, "text": " AI than most of AI research does."}, {"start": 494.0, "end": 499.56, "text": " And note number three, to me, it feels like this almost implies that best is to join the"}, {"start": 499.56, "end": 502.4, "text": " big guys where all the best hardware is."}, {"start": 502.4, "end": 507.96, "text": " I work in an amazing small to mid-size lab at the technical University of Vienna and in"}, {"start": 507.96, "end": 513.12, "text": " the last few years I have given relatively little consideration to the invitations from"}, {"start": 513.12, "end": 516.6, "text": " some of the more coveted and well-funded labs."}, {"start": 516.6, "end": 518.0, "text": " Was it a mistake?"}, {"start": 518.0, "end": 519.32, "text": " Should I change that?"}, {"start": 519.32, "end": 521.48, "text": " I 
really don't know for sure."}, {"start": 521.48, "end": 525.32, "text": " If for some reason you haven't read the piece at the start of the video, make sure to"}, {"start": 525.32, "end": 527.0, "text": " do it after watching this."}, {"start": 527.0, "end": 528.36, "text": " It's really worth it."}, {"start": 528.36, "end": 534.32, "text": " In the meantime, interestingly, the non-profit AI research lab OpenAI also established a"}, {"start": 534.32, "end": 539.6800000000001, "text": " four-profit or what they like to call Capped Profit Company to be able to compete with"}, {"start": 539.6800000000001, "end": 543.52, "text": " the other big guys like DeepMind and Facebook Reality Labs."}, {"start": 543.52, "end": 546.2, "text": " I think Richard has a solid point here."}, {"start": 546.2, "end": 567.9200000000001, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=-jL2o_15s1E
Beautiful Gooey Simulations, Now 10 Times Faster
❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 📝 The paper "GPU Optimization of Material Point Methods" is available here: http://www.cemyuksel.com/research/papers/gpu_mpm.pdf 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is 2 Minute Papers with Károly Zsolnai-Fehér. You know that when I see a piece of fluid, I can't resist making videos about it. I just can't. Oh my goodness. Look at that. These animations were created using the material point method, or MPM in short, which is a hybrid simulation method that is able to simulate not only substances like water and honey, but also snow, granular solids, cloth, and many, many other amazing things that you see here. Before you ask, the hybrid part means that it uses both particles and grids during the computations. Unfortunately, it is very computationally demanding, so it takes forever to get these simulations ready. And typically, in my simulations, after this step is done, I almost always find that the objects did not line up perfectly, so I can start the whole process again. Oh well. This technique has multiple stages, uses multiple data structures in many of them, and often we have to wait for the results of one stage to be able to proceed to the next. This is not that much of a problem if we seek to implement this on our processor, but it would be way, way faster if we could run it on the graphics card, as long as we map these computations onto it properly. However, due to these stages waiting for each other, it is immensely difficult to use the heavily parallel computing capabilities of the graphics card. So here you go. This technique enables running MPM on your graphics card efficiently, resulting in an up to 10-times improvement over previous works. As a result, this granulation scene has more than 6.5 million particles on a very fine grid and can be simulated in only around 40 seconds per frame. And not only that, but the numerical stability of this technique is also superior to previous works, and it is thereby able to correctly simulate how the individual grains interact in this block of sand. Here is a more detailed breakdown of the number of particles, grid resolutions, and the amount of computation time needed to simulate each step. I am currently in the middle of a monstrous fluid simulation project, and oh man, I wish I had these numbers for the computation time. This gelatin scene takes less than 7 seconds per frame to simulate with a similar number of particles. Look at that heavenly gooey thing. It probably tastes like strawberries. And if you enjoyed this video and you wish to help us teach more people about these amazing papers, please consider supporting us on Patreon. In return, we can offer you early access to these episodes, or you can also get your name in the video description of every episode as a key supporter. You can find us at patreon.com slash 2 Minute Papers. Thanks for watching and for your generous support, and I'll see you next time.
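To make the "hybrid" particle-and-grid idea above a little more concrete, here is a tiny Python sketch of a single particle-to-grid transfer step. It is a simplified CPU illustration with made-up parameters and nearest-node weighting, not the paper's GPU implementation.

```python
import numpy as np

# Toy 2D particle-to-grid (P2G) transfer: the hybrid idea of MPM in miniature.
GRID_RES = 64          # grid cells per side (hypothetical resolution)
DX = 1.0 / GRID_RES    # cell size

def particle_to_grid(positions, velocities, masses):
    """Scatter particle mass and momentum onto grid nodes (nearest-node weighting)."""
    grid_mass = np.zeros((GRID_RES, GRID_RES))
    grid_momentum = np.zeros((GRID_RES, GRID_RES, 2))
    for p, v, m in zip(positions, velocities, masses):
        i, j = np.clip((p / DX).astype(int), 0, GRID_RES - 1)
        grid_mass[i, j] += m
        grid_momentum[i, j] += m * v
    # Grid velocity = momentum / mass wherever mass is nonzero.
    grid_velocity = np.zeros_like(grid_momentum)
    occupied = grid_mass > 0
    grid_velocity[occupied] = grid_momentum[occupied] / grid_mass[occupied][:, None]
    return grid_mass, grid_velocity

# Tiny demo: a falling blob of particles.
rng = np.random.default_rng(0)
pos = rng.uniform(0.4, 0.6, size=(1000, 2))
vel = np.tile(np.array([0.0, -1.0]), (1000, 1))
mass = np.ones(1000)
gm, gv = particle_to_grid(pos, vel, mass)
print("occupied grid cells:", int((gm > 0).sum()))
```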
[{"start": 0.0, "end": 4.7, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Karajor Naifahir."}, {"start": 4.7, "end": 9.540000000000001, "text": " You know that I see a piece of fluid and I can't resist making videos about it."}, {"start": 9.540000000000001, "end": 10.74, "text": " I just can't."}, {"start": 10.74, "end": 12.22, "text": " Oh my goodness."}, {"start": 12.22, "end": 13.700000000000001, "text": " Look at that."}, {"start": 13.700000000000001, "end": 18.92, "text": " These animations were created using the material point method or MPM in short, which is a"}, {"start": 18.92, "end": 24.400000000000002, "text": " hybrid simulation method, which is able to simulate not only substances like water and"}, {"start": 24.4, "end": 31.06, "text": " honey, but it can also simulate snow, granular solids, cloth, and many, many other amazing"}, {"start": 31.06, "end": 32.9, "text": " things that you see here."}, {"start": 32.9, "end": 38.1, "text": " Before you ask, the hybrid part means that it both uses particles and grids during the"}, {"start": 38.1, "end": 39.5, "text": " computations."}, {"start": 39.5, "end": 44.980000000000004, "text": " Unfortunately, it is very computationally demanding, so it takes forever to get these simulations"}, {"start": 44.980000000000004, "end": 45.980000000000004, "text": " ready."}, {"start": 45.980000000000004, "end": 51.34, "text": " And typically, in my simulations, after this step is done, I almost always find that the"}, {"start": 51.34, "end": 56.14, "text": " objects did not line up perfectly, so I can start the whole process again."}, {"start": 56.14, "end": 57.14, "text": " Oh well."}, {"start": 57.14, "end": 62.42, "text": " This technique has multiple stages, uses multiple data structures in many of them, and often"}, {"start": 62.42, "end": 67.74000000000001, "text": " we have to wait for the results of one stage to be able to proceed to the next."}, {"start": 67.74000000000001, "end": 72.22, "text": " This is not that much of a problem if we seek to implement this on our processor, but"}, {"start": 72.22, "end": 78.02000000000001, "text": " it would be way, way faster if we could run it on the graphics card, as long as we implement"}, {"start": 78.02000000000001, "end": 80.06, "text": " these problems on them properly."}, {"start": 80.06, "end": 85.14, "text": " However, due to these stages waiting for each other, it is immensely difficult to use"}, {"start": 85.14, "end": 89.3, "text": " the heavily parallel computing capabilities of the graphics card."}, {"start": 89.3, "end": 90.42, "text": " So here you go."}, {"start": 90.42, "end": 96.74000000000001, "text": " This technique enables running MPM on your graphics card efficiently, resulting in an up to 10"}, {"start": 96.74000000000001, "end": 99.66, "text": " time improvement over previous works."}, {"start": 99.66, "end": 105.82000000000001, "text": " As a result, this granulation scene has more than 6.5 million particles on a very fine"}, {"start": 105.82, "end": 119.58, "text": " grid and can be simulated in only around 40 seconds per frame."}, {"start": 119.58, "end": 124.89999999999999, "text": " And not only that, but the numerical stability of this technique is also superior to previous"}, {"start": 124.89999999999999, "end": 130.54, "text": " works, and it is thereby able to correctly simulate how the individual grains interact"}, {"start": 130.54, "end": 132.74, "text": " in this block of sand."}, {"start": 132.74, "end": 138.34, "text": " Here is a more detailed 
breakdown of the number of particles, grid resolutions, and the"}, {"start": 138.34, "end": 141.74, "text": " amount of computation time needed to simulate each step."}, {"start": 141.74, "end": 147.46, "text": " I am currently in the middle of a monstrous fluid simulation project, and oh man, I wish"}, {"start": 147.46, "end": 150.54000000000002, "text": " I had these numbers for the computation time."}, {"start": 150.54000000000002, "end": 155.58, "text": " This gelatin scene takes less than 7 seconds per frame to simulate with a similar number"}, {"start": 155.58, "end": 157.26000000000002, "text": " of particles."}, {"start": 157.26000000000002, "end": 159.5, "text": " Look at that heavenly gooey thing."}, {"start": 159.5, "end": 161.74, "text": " It probably tastes like strawberries."}, {"start": 161.74, "end": 165.78, "text": " And if you enjoyed this video and you wish to help us teach more people about these amazing"}, {"start": 165.78, "end": 169.3, "text": " papers, please consider supporting us on Patreon."}, {"start": 169.3, "end": 174.62, "text": " In return, we can offer you early access to these episodes, or you can also get your name"}, {"start": 174.62, "end": 178.5, "text": " in the video description of every episode as a key supporter."}, {"start": 178.5, "end": 182.54000000000002, "text": " You can find us at patreon.com slash 2 Minute Papers."}, {"start": 182.54, "end": 192.54, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=luwP75lPExo
NeuroSAT: An AI That Learned Solving Logic Problems
❤️ This video has been kindly supported by my friends at Arm Research. Check them out here! - http://bit.ly/2TqOWAu 📝 The paper "Learning a SAT Solver from Single-Bit Supervision" is available here: https://arxiv.org/abs/1802.03685 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bruno Brito, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Image an article sources: SAT: https://www.geeksforgeeks.org/2-satisfiability-2-sat-problem/ - Source: GeeksforGeeks NP-Completeness: https://en.wikipedia.org/wiki/List_of_NP-complete_problems Neural network image source: https://en.wikipedia.org/wiki/File:Neural_network_example.svg Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Please meet NeuroSAT. The name of this technique tells us what it is about: the "neuro" part means that it is a neural network-based learning method, and the "SAT" part means that it is able to solve satisfiability problems. This is a family of problems where we are given a logic formula and we have to decide whether its variables can be chosen in a way such that the expression comes out true. That of course sounds quite nebulous, so let's have a look at a simple example. This formula says that F is true if A is true and, at the same time, not B is true. So if we choose A to be true and B to be false, this expression comes out true, or in other words, this problem is satisfiable. Having a good solution for SAT is already great for solving many problems involving logic; however, the more interesting part is that it can help us solve an enormous set of other problems. For instance, ones that involve graphs describing people in social networks, and many others that you see here. This can be done by performing something that mathematicians like to call polynomial-time reduction, or Karp reduction, which means that many other problems that seem completely different can be converted into a SAT problem. In short, if you can solve SAT well, you can solve all of these problems well. This is one of the amazing revelations I learned about during my mathematical curriculum. The only problem is that when trying to solve big and complex SAT problems, we can often not do much better than random guessing, which for some of the nastiest cases takes so long that it practically never finishes. And get this, interestingly, this work presents us with a neural network that is able to solve problems of this form, and not like this tiny, tiny baby problem, but much bigger ones. And this really shouldn't be possible. Here's why. To train a neural network, we require training data. The input is a problem definition and the output is whether this problem is satisfiable. And we can stop right here, because here lies our problem. This doesn't really make any sense, because we just said that it is difficult to solve big SAT problems. And here comes the catch. This neural network learns from SAT problems that are small enough to be solved by traditional handcrafted methods. We can create arbitrarily many training examples with these solvers, albeit these are all small ones. And that's not it; there are three key factors here that make this technique really work. One, it learns from only single-bit supervision. This means that the output that we talked about is only yes or no. It isn't shown the solution itself. That's all the algorithm learns from. Two, when we request a solution from the neural network, it not only tells us the same binary yes/no answer, but it can go beyond that. And when the problem is satisfiable, it will almost always provide us with the exact solution. It is not only able to tell us whether the problem can be solved, but it almost always provides a possible solution as well. That is indeed remarkable. This image may be familiar from the thumbnail, and here you can see the neural network's inner representation of how these variables change over time as it sees a satisfiable or unsatisfiable problem and how it comes to its own conclusions. And three, when we ask the neural network for a solution, it is able to defeat problems that are larger and more difficult than the ones it has trained on. So, this means that we train it on simple problems that we can solve ourselves, and using these as training data, it will be able to solve much harder problems that we can't solve ourselves. This is crucial, because otherwise this neural network would only be as good as the handcrafted algorithm used to train it, which, in other words, would not be useful at all. Isn't this amazing? I will note that there are handcrafted algorithms that are able to match and often outperform NeuroSAT. However, these took decades of research work to invent, whereas this is a learning-based technique that looks at as little information as the problem definition and whether it is satisfiable, and it is able to come up with a damn good algorithm by itself. What a time to be alive! This video has been kindly supported by my friends at ARM Research. Make sure to check them out through the link in the video description. Thanks for watching and for your generous support, and I'll see you next time.
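For readers who want to see what a SAT instance and its yes/no answer look like in practice, here is a minimal, self-contained Python sketch. It brute-forces a tiny formula given in clause form; this only illustrates the problem and its single-bit answer, and says nothing about how NeuroSAT itself works internally.

```python
from itertools import product

# Clauses are lists of literals: a positive integer i means "variable i is true",
# a negative integer -i means "variable i is false". The formula is the AND of all clauses.
def is_satisfiable(clauses, num_vars):
    """Try every assignment; return (yes/no, a satisfying assignment if one exists)."""
    for bits in product([False, True], repeat=num_vars):
        assignment = {i + 1: bits[i] for i in range(num_vars)}
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause) for clause in clauses):
            return True, assignment
    return False, None

# The toy example from the video, F = A AND (NOT B), written as two one-literal clauses.
sat, model = is_satisfiable([[1], [-2]], num_vars=2)
print(sat, model)   # True, {1: True, 2: False}
```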
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fahir."}, {"start": 4.32, "end": 6.32, "text": " Please meet NeuroSet."}, {"start": 6.32, "end": 10.84, "text": " The name of this technique tells us what it is about, the NeuroPART means that it is"}, {"start": 10.84, "end": 16.52, "text": " a neural network-based learning method, and the setPART means that it is able to solve"}, {"start": 16.52, "end": 18.88, "text": " satisfiability problems."}, {"start": 18.88, "end": 24.240000000000002, "text": " This is a family of problems where we are given a logic formula and we have to decide whether"}, {"start": 24.24, "end": 30.36, "text": " these variables can be chosen in a way such that this expression comes out true."}, {"start": 30.36, "end": 35.28, "text": " That of course sounds quite nebulous, so let's have a look at a simple example."}, {"start": 35.28, "end": 42.92, "text": " This formula says that F is true if A is true and at the same time not B is true."}, {"start": 42.92, "end": 49.239999999999995, "text": " So if we choose A to be true and B as false, this expression is also true or in other"}, {"start": 49.239999999999995, "end": 52.16, "text": " words this problem is satisfied."}, {"start": 52.16, "end": 57.879999999999995, "text": " Having a good solution for set is already great for solving many problems involving logic,"}, {"start": 57.879999999999995, "end": 63.519999999999996, "text": " however the more interesting part is that it can help us solve an enormous set of other"}, {"start": 63.519999999999996, "end": 64.52, "text": " problems."}, {"start": 64.52, "end": 69.72, "text": " For instance, once that involves graphs, describing people in social networks and many others"}, {"start": 69.72, "end": 71.28, "text": " that you see here."}, {"start": 71.28, "end": 76.19999999999999, "text": " This can be done by performing something that mathematicians like to call polynomial time"}, {"start": 76.19999999999999, "end": 81.96, "text": " reduction or car production, which means that many other problems that seem completely"}, {"start": 81.96, "end": 85.6, "text": " different can be converted into a set problem."}, {"start": 85.6, "end": 90.83999999999999, "text": " In short, if you can solve set well, you can solve all of these problems well."}, {"start": 90.83999999999999, "end": 96.08, "text": " This is one of the amazing revelations I learned about during my mathematical curriculum."}, {"start": 96.08, "end": 101.75999999999999, "text": " The only problem is that when trying to solve big and complex set problems, we can often"}, {"start": 101.75999999999999, "end": 107.83999999999999, "text": " not do much better than random guessing, which for some of the most nasty cases takes"}, {"start": 107.84, "end": 111.92, "text": " so long that it practically is never going to finish."}, {"start": 111.92, "end": 117.28, "text": " And get this, interestingly, this work presents us with a neural network that is able to"}, {"start": 117.28, "end": 123.36, "text": " solve problems of this form, but not like this tiny, tiny baby problem, but much bigger"}, {"start": 123.36, "end": 124.84, "text": " ones."}, {"start": 124.84, "end": 127.64, "text": " And this really shouldn't be possible."}, {"start": 127.64, "end": 128.64000000000001, "text": " Here's why."}, {"start": 128.64000000000001, "end": 132.28, "text": " To train a neural network, we require training data."}, {"start": 132.28, "end": 137.8, "text": " The input is a problem 
definition and the output is whether this problem is satisfiable."}, {"start": 137.8, "end": 142.96, "text": " And we can stop right here because here lies our problem."}, {"start": 142.96, "end": 147.52, "text": " This doesn't really make any sense because we just said that it is difficult to solve"}, {"start": 147.52, "end": 149.32000000000002, "text": " big set problems."}, {"start": 149.32000000000002, "end": 151.04000000000002, "text": " And here comes the catch."}, {"start": 151.04000000000002, "end": 157.0, "text": " This neural network learns from set problems that are small enough to be solved by traditional"}, {"start": 157.0, "end": 158.68, "text": " handcrafted methods."}, {"start": 158.68, "end": 164.60000000000002, "text": " We can create arbitrarily many training examples with these solvers, albeit these are all small"}, {"start": 164.60000000000002, "end": 165.60000000000002, "text": " ones."}, {"start": 165.6, "end": 171.0, "text": " And that's not it, there are three key factors here that make this technique really work."}, {"start": 171.0, "end": 174.64, "text": " One, it learns from only single bit supervision."}, {"start": 174.64, "end": 179.4, "text": " This means that the output that we talked about is only yes or no."}, {"start": 179.4, "end": 181.79999999999998, "text": " It isn't shown the solution itself."}, {"start": 181.79999999999998, "end": 184.0, "text": " That's all the algorithm learns from."}, {"start": 184.0, "end": 189.51999999999998, "text": " Two, when we request a solution from the neural network, it not only tells us the same"}, {"start": 189.51999999999998, "end": 193.56, "text": " binary yes, no answer, but it can go beyond that."}, {"start": 193.56, "end": 198.52, "text": " And when the problem is satisfiable, it will almost always provide us with the exact"}, {"start": 198.52, "end": 199.72, "text": " solution."}, {"start": 199.72, "end": 204.84, "text": " It is not only able to tell us whether the problem can be solved, but it almost always"}, {"start": 204.84, "end": 207.68, "text": " provides a possible solution as well."}, {"start": 207.68, "end": 210.28, "text": " That is indeed remarkable."}, {"start": 210.28, "end": 214.92000000000002, "text": " This image may be familiar from the thumbnail and here you can see how the neural networks"}, {"start": 214.92000000000002, "end": 221.12, "text": " in a representation of how these variables change over time as it sees a satisfiable"}, {"start": 221.12, "end": 226.24, "text": " or unsatisfiable problem and how it comes to its own conclusions."}, {"start": 226.24, "end": 231.6, "text": " And three, when we ask the neural network for a solution, it is able to defeat problems"}, {"start": 231.6, "end": 236.52, "text": " that are larger and more difficult than the ones it has trained on."}, {"start": 236.52, "end": 242.52, "text": " So, this means that we train it on simple problems that we can solve ourselves and using"}, {"start": 242.52, "end": 248.04000000000002, "text": " these as training data, we will be able to solve much harder problems that we can't"}, {"start": 248.04000000000002, "end": 249.56, "text": " solve ourselves."}, {"start": 249.56, "end": 255.52, "text": " This is crucial because otherwise, this neural network would only be as good as the handcrafted"}, {"start": 255.52, "end": 260.2, "text": " algorithm used to train it, which in other words is not useful at all."}, {"start": 260.2, "end": 262.16, "text": " Isn't this amazing?"}, {"start": 262.16, "end": 267.8, "text": " I will 
note that there are handcrafted algorithms that are able to match and often outperform"}, {"start": 267.8, "end": 269.12, "text": " neural set."}, {"start": 269.12, "end": 274.44, "text": " However, these took decades of research work to invent, whereas this is a learning-based"}, {"start": 274.44, "end": 280.04, "text": " technique that just looks at as little information as the problem definition and whether it is"}, {"start": 280.04, "end": 285.96, "text": " satisfiable and it is able to come up with a damn good algorithm by itself."}, {"start": 285.96, "end": 287.88, "text": " What a time to be alive!"}, {"start": 287.88, "end": 291.8, "text": " This video has been kindly supported by my friends at ARM Research."}, {"start": 291.8, "end": 294.96, "text": " Make sure to check them out through the link in the video description."}, {"start": 294.96, "end": 305.35999999999996, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=dvzlvHNxdfI
This AI Learned to “Photoshop” Human Faces
❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 📝 The paper "SC-FEGAN: Face Editing Generative Adversarial Network with User's Sketch and Color" is available here: https://arxiv.org/abs/1902.06838 https://github.com/JoYoungjoo/SC-FEGAN 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers humbnail background image credit: https://pixabay.com/images/id-3404534/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this series, we talk quite a bit about neural network-based learning methods that are able to generate new images for us from some sort of sparse description, like a written sentence, or a set of controllable parameters. These can enable us mere mortals without artistic skills to come up with novel images. However, one thing that comes up with almost every single one of these techniques is the lack of artistic control. You see, if we provide a very coarse input, there are many, many different ways for the neural networks to create photorealistic images from it. So how do we get more control over these results? An earlier paper from NVIDIA generated human faces for us and used a latent space technique that allows us some more fine-grained control over the images. It is beyond amazing. But these are called latent variables because they represent the inner working process of the neural network, and they do not exactly map to our intuition of facial features in reality. And now, have a look at this new technique that allows us to edit the geometry of the jawline of a person, put a smile on someone's face in a more peaceful way than seen in some Batman movies, or remove the sunglasses and add some crazy hair at the same time. Even changing the hair of someone while adding an earring with a prescribed shape is also possible. Whoa! And I just keep talking and talking about artistic control, so it's great that these shapes are supported, but what about another important aspect of artistic control, for instance, colors? Yep, that is also supported. Here you can see that the color of the woman's eyes can be changed, and the technique also understands the concept of makeup as well. How cool is that? Not only that, but it is also blazing fast. It takes roughly 50 milliseconds to create these images at a resolution of 512 by 512, so in short, we can do this about 20 times per second. Make sure to have a look at the paper that also contains a validation section against other techniques and reference results. That is, there is such a thing as a reference result in this case, which is really cool, and you will also find a novel style-loss formulation that makes all this crazy wizardry happen. No web app for this one; however, the source code is available free of charge and under a permissive license, so let the experiments begin. If you have enjoyed this video and you feel that a bunch of these videos are worth $3 a month, please consider supporting us on Patreon. In return, we can offer you early access to these episodes and more to keep your paper addiction in check. It is truly a privilege for me to be able to keep making these videos. I am really enjoying the journey, and this is only possible because of your support on Patreon. This is why every episode ends with, you guessed it right, thanks for watching and for your generous support and I'll see you next time.
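As a rough illustration of how such user edits could be handed to a generator network, here is a hedged Python sketch that packs the masked photo, the mask, the sketch, the color strokes, and a noise channel into one input tensor. The channel layout and helper names are assumptions for illustration, not the authors' released code.

```python
import numpy as np

H = W = 512  # the resolution quoted in the video

def build_generator_input(photo, mask, sketch, color_strokes, rng=np.random.default_rng()):
    """photo: (H, W, 3) in [0, 1]; mask/sketch: (H, W) in {0, 1}; color_strokes: (H, W, 3).
    Returns an (H, W, 9) tensor a face-editing generator could consume (assumed layout)."""
    erased = photo * (1.0 - mask[..., None])   # remove the region the user wants to edit
    noise = rng.normal(size=(H, W, 1))         # per-pixel noise channel
    return np.concatenate(
        [erased,                                # 3 channels: photo with the hole cut out
         mask[..., None],                       # 1 channel: where to inpaint
         sketch[..., None],                     # 1 channel: desired edges (e.g. jawline, hair)
         color_strokes * mask[..., None],       # 3 channels: desired colors inside the hole
         noise],                                # 1 channel: randomness
        axis=-1)

x = build_generator_input(np.zeros((H, W, 3)), np.zeros((H, W)),
                          np.zeros((H, W)), np.zeros((H, W, 3)))
print(x.shape)   # (512, 512, 9)
```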
[{"start": 0.0, "end": 4.5600000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Karojolna Ifehir."}, {"start": 4.5600000000000005, "end": 9.24, "text": " In this series, we talk quite a bit about neural network-based learning methods that are"}, {"start": 9.24, "end": 14.200000000000001, "text": " able to generate new images for us from some sort of sparse description, like a written"}, {"start": 14.200000000000001, "end": 18.32, "text": " sentence, or a set of controllable parameters."}, {"start": 18.32, "end": 23.72, "text": " These can enable us mere mortals without artistic skills to come up with novel images."}, {"start": 23.72, "end": 28.88, "text": " However, one thing that comes up with almost every single one of these techniques is the"}, {"start": 28.88, "end": 31.119999999999997, "text": " lack of artistic control."}, {"start": 31.119999999999997, "end": 36.2, "text": " You see, if we provide a very coarse input, there are many many different ways for the neural"}, {"start": 36.2, "end": 39.239999999999995, "text": " networks to create photorealistic images from them."}, {"start": 39.239999999999995, "end": 42.76, "text": " So how do we get more control over these results?"}, {"start": 42.76, "end": 48.68, "text": " An earlier paper from Envidia generated human faces for us and used a latent space technique"}, {"start": 48.68, "end": 52.92, "text": " that allows us some more fine-grained control over the images."}, {"start": 52.92, "end": 55.36, "text": " It is beyond amazing."}, {"start": 55.36, "end": 60.32, "text": " But these are called latent variables because they represent the inner working process"}, {"start": 60.32, "end": 65.64, "text": " of the neural network and they do not exactly map to our intuition of facial features in"}, {"start": 65.64, "end": 66.96, "text": " reality."}, {"start": 66.96, "end": 72.0, "text": " And now, have a look at this new technique that allows us to edit the geometry of the jawline"}, {"start": 72.0, "end": 78.4, "text": " of a person, put a smile on someone's face in a more peaceful way than seen in some Batman"}, {"start": 78.4, "end": 85.32, "text": " movies, or remove the sunglasses and add some crazy hair at the same time."}, {"start": 85.32, "end": 90.36, "text": " Even changing the hair of someone while adding an earring with a prescribed shape is also"}, {"start": 90.36, "end": 91.36, "text": " possible."}, {"start": 91.36, "end": 93.44, "text": " Whoa!"}, {"start": 93.44, "end": 98.16, "text": " And I just keep talking and talking about artistic control so it's great that these shapes"}, {"start": 98.16, "end": 104.56, "text": " are supported, but what about another important aspect of artistic control, for instance, colors?"}, {"start": 104.56, "end": 107.47999999999999, "text": " Yep, that is also supported."}, {"start": 107.47999999999999, "end": 113.39999999999999, "text": " Here you can see that the color of the woman's eyes can be changed and the technique also understands"}, {"start": 113.4, "end": 116.12, "text": " the concept of makeup as well."}, {"start": 116.12, "end": 118.12, "text": " How cool is that?"}, {"start": 118.12, "end": 120.96000000000001, "text": " Not only that, but it is also blazing fast."}, {"start": 120.96000000000001, "end": 127.48, "text": " It takes roughly 50 milliseconds to create these images with the resolution of 512 by 512,"}, {"start": 127.48, "end": 131.12, "text": " so in short, we can do this about 20 times per second."}, {"start": 131.12, "end": 135.56, "text": 
" Make sure to have a look at the paper that also contains a validation section against other"}, {"start": 135.56, "end": 138.88, "text": " techniques and reference results."}, {"start": 138.88, "end": 144.0, "text": " This is out there is such a thing as a reference result in this case, which is really cool,"}, {"start": 144.0, "end": 150.44, "text": " and you will also find a novel style-loss formulation that makes all this crazy wizardry happen."}, {"start": 150.44, "end": 155.32, "text": " No web app for this one, however, the source code is available free of charge and under"}, {"start": 155.32, "end": 159.24, "text": " a permissive license, so let the experiments begin."}, {"start": 159.24, "end": 164.0, "text": " If you have enjoyed this video and you feel that a bunch of these videos are worth $3"}, {"start": 164.0, "end": 167.32, "text": " a month, please consider supporting us on Patreon."}, {"start": 167.32, "end": 172.44, "text": " In return, we can offer you early access to these episodes and more to keep your paper"}, {"start": 172.44, "end": 173.44, "text": " addiction in check."}, {"start": 173.44, "end": 178.64, "text": " It is truly a privilege for me to be able to keep making these videos I am really enjoying"}, {"start": 178.64, "end": 183.44, "text": " the journey and this is only possible because of your support on Patreon."}, {"start": 183.44, "end": 187.95999999999998, "text": " This is why every episode ends with, you guessed it right, thanks for watching and for"}, {"start": 187.96, "end": 199.12, "text": " your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=C7Dmu0GtrSw
Google’s PlaNet AI Learns Planning from Pixels
Errata: https://twitter.com/arjunbazinga/status/1114497224174497793 📝 The paper "Learning Latent Dynamics for Planning from Pixels" and its source code is available here: https://planetrl.github.io/ https://arxiv.org/abs/1811.04551 https://github.com/google-research/planet ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #Google #PlaNet #PlaNetAI
Dear Fellow Scholars, this is 2 Minute Papers with Károly Zsolnai-Fehér. Today, we are going to talk about PlaNet, a technique that is meant to solve challenging image-based planning tasks with sparse rewards. Okay, that sounds great, but what do all of these terms mean? The planning part is simple. It means that the AI has to come up with a sequence of actions to achieve a goal, like pole balancing with a cart, teaching a virtual human or cheetah to walk, or hitting this box the right way to make sure it keeps rotating. The image-based part is big. This means that the AI has to learn the same way as a human, and that is, by looking at the pixels of the images. This is a huge difficulty bump, because the AI does not only have to learn to defeat the game itself, but also has to build an understanding of the visual concepts within the game. DeepMind's legendary deep Q-learning algorithm was able to learn from pixel inputs, but it was mighty inefficient at doing that, and no wonder: this problem formulation is immensely hard, and it is a miracle that we can muster any solution at all that can figure it out. The sparse reward part means that we rarely get feedback as to how well we are doing at these tasks, which is a nightmare situation for any learning algorithm. The key difference between this technique and classical reinforcement learning, which is what most researchers reach for to solve similar tasks, is that this one uses models for the planning. This means that it does not learn every new task from scratch, but after the first game, whichever it may be, it will have a rudimentary understanding of gravity and dynamics, and it will be able to reuse this knowledge in the next games. As a result, it will get a head start when learning a new game, and is therefore often 50 times more efficient than the previous technique that learns from scratch, and not only that, but it has other really cool advantages as well, which I will tell you about in just a moment. Here you can see that indeed, the blue lines significantly outperform the previous techniques shown with red and green for each of these tasks. I like how the plot is organized in the same grid as the tasks were, as it makes it much more readable when juxtaposed with the video footage. As promised, here are the two really cool additional advantages of this model-based agent. The first is that we don't have to train six separate AIs for all of these tasks, but finally, we can get one AI that is able to solve all six of these tasks efficiently. And second, it can look at as little as five frames of an animation, which is approximately one fifth of a second's worth of footage, which is barely anything, and it is able to predict how the sequence would continue with remarkably high accuracy, and over a long time frame, which is quite a challenge. This is an excellent paper with beautiful mathematical formulations. I recommend that you have a look in the video description. The source code is also available free of charge for everyone, so I bet this will be an exciting direction for future research works, and I'll be here to report on it to you. Make sure to subscribe and hit the bell icon to not miss future episodes. Thanks for watching, and for your generous support, and I'll see you next time.
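To give a feel for what "using a model for the planning" means, here is a bare-bones Python sketch of a random-shooting planner that imagines rollouts inside a learned (here, toy) dynamics model and picks the best first action. PlaNet's actual planner works in a learned latent space with a more sophisticated search, so treat this strictly as an illustrative stand-in; the model and reward are made up.

```python
import numpy as np

def plan_action(model, state, action_dim, horizon=12, candidates=500,
                rng=np.random.default_rng(0)):
    """Random-shooting planner: imagine candidate action sequences inside the model
    and return the first action of the best-scoring imagined trajectory."""
    best_return, best_first_action = -np.inf, None
    for _ in range(candidates):
        s, total = state, 0.0
        actions = rng.uniform(-1.0, 1.0, size=(horizon, action_dim))
        for a in actions:
            s, r = model(s, a)      # roll the imagined trajectory forward in the model
            total += r
        if total > best_return:
            best_return, best_first_action = total, actions[0]
    return best_first_action

# Toy "learned model": state is a position, reward is for staying near the origin.
toy_model = lambda s, a: (s + 0.1 * a, -float(np.abs(s + 0.1 * a).sum()))
print(plan_action(toy_model, np.zeros(1), action_dim=1))
```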
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Karojol Naifahir."}, {"start": 4.4, "end": 9.64, "text": " Today, we are going to talk about Planet, a technique that is meant to solve challenging"}, {"start": 9.64, "end": 13.0, "text": " image-based planning tasks with sparse rewards."}, {"start": 13.0, "end": 17.32, "text": " Okay, that sounds great, but what do all of these terms mean?"}, {"start": 17.32, "end": 19.2, "text": " The planning part is simple."}, {"start": 19.2, "end": 24.16, "text": " It means that the AI has to come up with a sequence of actions to achieve a goal, like"}, {"start": 24.16, "end": 31.0, "text": " pole balancing with a cart, teaching a virtual human or cheetah to walk, or hitting this box"}, {"start": 31.0, "end": 34.6, "text": " the right way to make sure it keeps rotating."}, {"start": 34.6, "end": 36.8, "text": " The image-based part is big."}, {"start": 36.8, "end": 41.68, "text": " This means that the AI has to learn the same way as a human, and that is, by looking at"}, {"start": 41.68, "end": 43.72, "text": " the pixels of the images."}, {"start": 43.72, "end": 48.32, "text": " This is a huge difficulty bump, because the AI does not only have to learn to defeat"}, {"start": 48.32, "end": 53.72, "text": " the game itself, but also has to build an understanding of the visual concepts within"}, {"start": 53.72, "end": 54.72, "text": " the game."}, {"start": 54.72, "end": 60.12, "text": " Deep minds legendary deep-queue learning algorithm was able to learn from pixel inputs, but"}, {"start": 60.12, "end": 65.84, "text": " it was mighty inefficient at doing that, and no wonder this problem formulation is immensely"}, {"start": 65.84, "end": 71.64, "text": " hard, and it is a miracle that we can master any solution at all that can figure it out."}, {"start": 71.64, "end": 76.4, "text": " The sparse reward part means that we rarely get feedback as to how well we are doing at"}, {"start": 76.4, "end": 80.8, "text": " these tasks, which is a nightmare situation for any learning algorithm."}, {"start": 80.8, "end": 84.84, "text": " The key difference with this technique against classical reinforcement learning, which"}, {"start": 84.84, "end": 90.47999999999999, "text": " is what most researchers reach out to to solve similar tasks, is that this one uses models"}, {"start": 90.47999999999999, "end": 91.88, "text": " for the planning."}, {"start": 91.88, "end": 96.75999999999999, "text": " This means that it does not learn every new task from scratch, but after the first game,"}, {"start": 96.75999999999999, "end": 102.47999999999999, "text": " whichever it may be, it will have a rudimentary understanding of gravity and dynamics, and"}, {"start": 102.47999999999999, "end": 106.2, "text": " it will be able to reuse this knowledge in the next games."}, {"start": 106.2, "end": 110.96000000000001, "text": " As a result, it will get a head start when learning a new game, and is therefore often"}, {"start": 110.96000000000001, "end": 116.4, "text": " 50 times more efficient than the previous technique that learns from scratch, and not only"}, {"start": 116.4, "end": 121.84, "text": " that, but it has other really cool advantages as well, which I will tell you about in just"}, {"start": 121.84, "end": 122.92, "text": " a moment."}, {"start": 122.92, "end": 127.36, "text": " Here you can see that indeed, the blue lines significantly outperform the previous"}, {"start": 127.36, "end": 131.76, "text": " techniques shown with red and 
green for each of these tasks."}, {"start": 131.76, "end": 136.95999999999998, "text": " I like how the plot is organized in the same grid as the tasks were, as it makes it much"}, {"start": 136.95999999999998, "end": 140.23999999999998, "text": " more readable when juxtaposed with the video footage."}, {"start": 140.23999999999998, "end": 144.67999999999998, "text": " As promised, here are the two really cool additional advantages of this model-based"}, {"start": 144.67999999999998, "end": 145.67999999999998, "text": " agent."}, {"start": 145.67999999999998, "end": 151.12, "text": " The first is that we don't have to train six separate AIs for all of these tasks, but"}, {"start": 151.12, "end": 157.56, "text": " finally, we can get one AI that is able to solve all six of these tasks efficiently."}, {"start": 157.56, "end": 162.84, "text": " And second, it can look at as little as five frames of an animation, which is approximately"}, {"start": 162.84, "end": 168.48, "text": " one fifth of a second worth of footage that is barely anything, and it is able to predict"}, {"start": 168.48, "end": 174.76, "text": " how the sequence would continue with a remarkably high accuracy, and over a long time frame,"}, {"start": 174.76, "end": 176.64000000000001, "text": " which is quite a challenge."}, {"start": 176.64000000000001, "end": 180.48000000000002, "text": " This is an excellent paper with beautiful mathematical formulations."}, {"start": 180.48000000000002, "end": 183.44, "text": " I recommend that you have a look in the video description."}, {"start": 183.44, "end": 188.72, "text": " The source code is also available free of charge for everyone, so I bet this will be an exciting"}, {"start": 188.72, "end": 193.64, "text": " direction for future research works, and I'll be here to report on it to you."}, {"start": 193.64, "end": 197.6, "text": " Make sure to subscribe and hit the bell icon to not miss future episodes."}, {"start": 197.6, "end": 227.56, "text": " Thanks for watching, and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=cD-eXjf854Q
DeepMind: The Hanabi Card Game Is the Next Frontier for AI Research
📝 The paper "The Hanabi Challenge: A New Frontier for AI Research" and a blog post is available here: https://arxiv.org/abs/1902.00506 http://www.marcgbellemare.info/blog/a-cooperative-benchmark-announcing-the-hanabi-learning-environment/ ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-591631/ The Hanabi card game images are owned by the published and one image by Drentsoft Media - https://www.youtube.com/watch?v=Ofzg71qHh8k Poker image: https://commons.wikimedia.org/wiki/File:Two_poker_cards_and_poker_chips_20170611.jpg Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #DeepMind #Hanabi #AI
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Now get this, after defeating chess, Go, and making incredible progress in Starcraft 2, scientists at DeepMind just published a paper where they claim that Hanabi is the next frontier in AI research. And we shall stop right here. I hear you asking me, Károly, after defeating all of these immensely difficult games, now you're trying to tell me that somehow, this silly card game is the next step? Yes, that's exactly what I'm saying. Let me explain. Hanabi is a card game where two to five players cooperate to build five card sequences, and to do that, they are only allowed to exchange very little information. This is also an imperfect information game, which means the players don't have all the knowledge needed to make a good decision. They have to work with what they have and try to infer the rest. For instance, poker is also an imperfect information game, because we don't see the cards of the other players and the game revolves around our guesses as to what they might have. In Hanabi, interestingly, it is the other way around. We see the cards of the other players, but not our own. The players have to work around this limitation by relying on each other, working out communication protocols, and inferring intent in order to win the game. Like in many of the best games, these simple rules conceal a vast array of strategies, all of which are extremely hard to teach to current learning algorithms. In the paper, a free and open source system is proposed to facilitate further research works and assess the performance of currently existing techniques. The difficulty level of this game can also be made easier or harder at will, from both inside and outside the game. And by inside, I mean that we can set parameters like the number of allowed mistakes that can be made before the game is considered lost. The outside part means that two main game settings are proposed. One, self-play: this is the easier case where the AI plays with copies of itself, and therefore knows quite a bit about its teammates. And two, ad hoc teams can also be constructed, which means that a set of agents that are not familiar with each other need to cooperate. This is immensely difficult. When I looked at the paper, I expected that, as we have many powerful learning algorithms, they would rip through this challenge with ease, but surprisingly, I found out that even the easier self-play variant severely underperforms compared to the best human players and handcrafted bots. There is plenty of work to be done here, and luckily, you can also run it yourself at home and train some of these agents on a consumer graphics card. Note that it is possible to create a handcrafted program that plays this game well, as we humans already know good strategies. However, this project is about getting several instances of an AI to learn new ways to communicate with each other effectively. Again, the goal is not to get a computer program that plays Hanabi well; the goal is to get an AI to learn to communicate effectively and work together towards a common goal. Much like chess, Starcraft 2, and Dota, Hanabi is still a proxy to be used for measuring progress in AI research. Nobody wants to spend millions of dollars to play card games at work, so the final goal of DeepMind is to reuse this algorithm for other applications where even we humans falter. I have included some more materials on this game in the video description, so make sure to have a look.
Thanks for watching and for your generous support and I'll see you next time.
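For a sense of the knobs the narration mentions, here is a small, purely hypothetical Python sketch of a Hanabi-style configuration and of the difference between self-play and ad hoc teams. The dictionary keys and the helper function are illustrative assumptions, not the actual API of the released environment.

```python
# Hypothetical configuration for a Hanabi-style environment (illustrative names only).
hanabi_config = {
    "players": 2,             # two to five players cooperate
    "colors": 5,              # five card sequences to build
    "ranks": 5,
    "hand_size": 5,
    "information_tokens": 8,  # how much the players may hint to each other
    "life_tokens": 3,         # "allowed mistakes" before the game is considered lost
}

def make_team(agent_pool, mode="self_play", team_size=2):
    """Self-play: copies of one agent. Ad hoc: unfamiliar agents thrown together."""
    if mode == "self_play":
        return [agent_pool[0]] * team_size
    return agent_pool[:team_size]

print(make_team(["agent_a", "agent_b", "agent_c"], mode="ad_hoc"))
```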
[{"start": 0.0, "end": 4.28, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karo Zonaifahir."}, {"start": 4.28, "end": 11.36, "text": " Now get this, after defeating chess, Go and making incredible progress in Starcraft 2,"}, {"start": 11.36, "end": 17.2, "text": " scientists at DeepMind just published a paper where they claim that Hanabi is the next"}, {"start": 17.2, "end": 19.96, "text": " frontier in AI research."}, {"start": 19.96, "end": 22.32, "text": " And we shall stop right here."}, {"start": 22.32, "end": 28.04, "text": " I hear you asking me, Karo, after defeating all of these immensely difficult games, now"}, {"start": 28.04, "end": 32.76, "text": " you're trying to tell me that somehow, this silly card game is the next step?"}, {"start": 32.76, "end": 35.839999999999996, "text": " Yes, that's exactly what I'm saying."}, {"start": 35.839999999999996, "end": 36.84, "text": " Let me explain."}, {"start": 36.84, "end": 43.24, "text": " Hanabi is a card game where two to five players cooperate to build five card sequences"}, {"start": 43.24, "end": 48.28, "text": " and to do that, they are only allowed to exchange very little information."}, {"start": 48.28, "end": 52.92, "text": " This is also an imperfect information game, which means the players don't have all the"}, {"start": 52.92, "end": 56.120000000000005, "text": " knowledge available needed to make a good decision."}, {"start": 56.12, "end": 60.199999999999996, "text": " They have to work with what they have and try to infer the rest."}, {"start": 60.199999999999996, "end": 65.52, "text": " For instance, poker is also an imperfect information game because we don't see the cards of the"}, {"start": 65.52, "end": 70.8, "text": " other players and the game revolves around our guesses as to what they might have."}, {"start": 70.8, "end": 74.47999999999999, "text": " In Hanabi, interestingly, it is the other way around."}, {"start": 74.47999999999999, "end": 78.75999999999999, "text": " So we see the cards of the other players, but not our own ones."}, {"start": 78.75999999999999, "end": 84.24, "text": " The players have to work around this limitation by relying on each other and working out communication"}, {"start": 84.24, "end": 88.64, "text": " protocols and infer intent in order to win the game."}, {"start": 88.64, "end": 93.88, "text": " Like in many of the best games, these simple rules conceal a vast array of strategies, all"}, {"start": 93.88, "end": 98.24, "text": " of which are extremely hard to teach to current learning algorithms."}, {"start": 98.24, "end": 103.08, "text": " In the paper, a free and open source system is proposed to facilitate further research"}, {"start": 103.08, "end": 107.56, "text": " works and assess the performance of currently existing techniques."}, {"start": 107.56, "end": 113.03999999999999, "text": " The difficulty level of this game can also be made easier or harder at will from both"}, {"start": 113.04, "end": 116.04, "text": " inside and outside the game."}, {"start": 116.04, "end": 121.32000000000001, "text": " And by inside, I mean that we can set parameters like the number of allowed mistakes that can"}, {"start": 121.32000000000001, "end": 124.44000000000001, "text": " be made before the game is considered lost."}, {"start": 124.44000000000001, "end": 128.88, "text": " The outside part means that two main game settings are proposed."}, {"start": 128.88, "end": 135.64000000000001, "text": " One, self-play, this is the easier case where the AI plays with copies of 
itself, therefore"}, {"start": 135.64000000000001, "end": 142.04000000000002, "text": " it knows quite a bit about its teammates and two, ad hoc teams can also be constructed,"}, {"start": 142.04, "end": 147.95999999999998, "text": " which means that a set of agents need to cooperate that are not familiar with each other."}, {"start": 147.95999999999998, "end": 150.04, "text": " This is immensely difficult."}, {"start": 150.04, "end": 155.2, "text": " When I looked at the paper, I expected that as we have many powerful learning algorithms,"}, {"start": 155.2, "end": 160.07999999999998, "text": " they would rip through this challenge with ease, but surprisingly, I found out that even"}, {"start": 160.07999999999998, "end": 167.07999999999998, "text": " the easier self-play variant severely underperforms compared to the best human players and handcrafted"}, {"start": 167.07999999999998, "end": 168.07999999999998, "text": " bots."}, {"start": 168.08, "end": 172.96, "text": " There is plenty of work to be done here, and luckily, you can also run it yourself at"}, {"start": 172.96, "end": 177.36, "text": " home and train some of these agents on a consumer graphics card."}, {"start": 177.36, "end": 183.08, "text": " Note that it is possible to create a handcrafted program that plays this game well as we humans"}, {"start": 183.08, "end": 185.08, "text": " already know good strategies."}, {"start": 185.08, "end": 191.72000000000003, "text": " However, this project is about getting several instances of an AI to learn new ways to communicate"}, {"start": 191.72000000000003, "end": 193.32000000000002, "text": " with each other effectively."}, {"start": 193.32, "end": 199.28, "text": " Again, the goal is not to get a computer program that plays Hanabi well, the goal is to get"}, {"start": 199.28, "end": 205.35999999999999, "text": " an AI to learn to communicate effectively and work together towards a common goal."}, {"start": 205.35999999999999, "end": 211.35999999999999, "text": " Much like chess, Starcraft 2, and Dota, Hanabi is still a proxy to be used for measuring"}, {"start": 211.35999999999999, "end": 213.84, "text": " progress in AI research."}, {"start": 213.84, "end": 218.44, "text": " Nobody wants to spend millions of dollars to play card games at work, so the final goal"}, {"start": 218.44, "end": 225.07999999999998, "text": " of DeepMind is to reuse this algorithm for other applications where even we humans falter."}, {"start": 225.07999999999998, "end": 228.96, "text": " I have included some more materials on this game in the video description, make sure"}, {"start": 228.96, "end": 229.96, "text": " to have a look."}, {"start": 229.96, "end": 257.40000000000003, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=OV0ivJB2lyI
Liquid Splash Modeling With Neural Networks
❤️ This video has been kindly supported by my friends at Arm Research. Check them out here! - http://bit.ly/2TqOWAu 📝 The paper "Liquid Splash Modeling with Neural Networks" is available here: https://ge.in.tum.de/publications/2018-mlflip-um/ The Arm word and logo are trademarks of Arm Limited (or its subsidiaries) in the US and/or elsewhere. All rights reserved. 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-2300713/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. If you have been watching this series for a while, you know that I am completely addicted to fluid simulations, so it is now time for a new fluid paper. And by the end of this video, I hope you will be addicted too. If we create a virtual world with a solid block and use our knowledge from physics to implement the laws of fluid dynamics, this solid block will indeed start behaving like a fluid. A baseline simulation technique for this will be referred to as FLIP in the videos that you see here, which stands for Fluid Implicit Particle. These simulations are often used in the video game industry, in movies, and of course, I cannot resist putting some of them in my papers as test scenes as well. In games, we are typically looking for real-time simulations, and in this case, we can only get a relatively coarse resolution simulation that lacks fine details, such as droplet formation and splashing. For movies, we want the highest fidelity simulation possible with honey coiling, two-way interaction with other objects, wet sand simulations, and all of those goodies; however, these all take forever to compute. This is the bane of fluid simulators. We have talked about a few earlier works that try to learn these laws via a neural network by feeding them a ton of video footage of these phenomena. This is absolutely amazing and is a true game changer for learning-based techniques. So why is that? Well, up until a few years ago, whenever we had a problem that was near impossible to solve with traditional techniques, we often reached out to a neural network or some other learning algorithm to solve it, often with success. However, that is not the case here. Something has changed. What has changed is that we can already solve these problems, but we can still make use of a neural network because it can help us with something that we can already do, but it does it faster and more easily. However, some of these techniques for fluids are not yet as accurate as we would like, and therefore haven't yet seen widespread adoption. So here's an incredible idea. Why not quickly compute a coarse simulation that surely adheres to the laws of physics, and then fill in the remaining details with a neural network? Again, FLIP is the baseline handcrafted technique, and you can see how the neural-network-infused simulation program on the left, by the name ML-FLIP, introduces these amazing details. And if we compare the results with the reference simulation, which took forever, you can see that it is quite similar and it indeed fills in the right kind of details. In case you are wondering about the training data, it learned the concept of splashes and droplets flying about, you guessed it right, by looking at splashes and droplets flying about. So now we know that it's quite accurate, and now the ultimate question is, how fast is it? Well, get this, we can expect a 10-times speedup from this. So this basically means that for every 10 all-nighters I would have to wait for my simulations, I now only have to wait one, and if something took only a few seconds, it now may be close to real time with this kind of visual fidelity. You know what? Sign me up. This video has been kindly supported by my friends at ARM Research. Make sure to check them out through the link in the video description. Thanks for watching, and for your generous support, and I'll see you next time.
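Here is a very small Python sketch of the "coarse physics plus learned details" idea described above: a cheap advection step stands in for the baseline FLIP solver, and a stand-in classifier decides which particles should detach as splash and get a velocity kick. The structure, the feature, and the threshold are assumptions for illustration, not the authors' ML-FLIP implementation.

```python
import numpy as np

def splash_probability(velocities):
    """Stand-in for the trained network: a hand-made rule on velocity magnitude."""
    speed = np.linalg.norm(velocities, axis=1)
    return 1.0 / (1.0 + np.exp(-(speed - 2.0)))   # sigmoid: faster particles splash more

def ml_flip_step(pos, vel, dt=1e-2, rng=np.random.default_rng(0)):
    pos = pos + dt * vel                           # coarse advection from the base solver
    p_splash = splash_probability(vel)
    is_splash = rng.uniform(size=len(pos)) < p_splash
    vel = vel.copy()
    # Splash particles get a small random velocity kick to form droplets and spray.
    vel[is_splash] += rng.normal(scale=0.5, size=(int(is_splash.sum()), 2))
    return pos, vel, is_splash

pos = np.zeros((5, 2))
vel = np.array([[0.5, 0.0], [1.5, 0.0], [2.5, 0.0], [3.5, 0.0], [4.5, 0.0]])
print(ml_flip_step(pos, vel)[2])   # which of the five test particles became splash
```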
[{"start": 0.0, "end": 4.5600000000000005, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karo Jolene Fahir."}, {"start": 4.5600000000000005, "end": 8.8, "text": " If you have been watching this series for a while, you know that I am completely addicted"}, {"start": 8.8, "end": 14.0, "text": " to fluid simulations, so it is now time for a new fluid paper."}, {"start": 14.0, "end": 17.72, "text": " And by the end of this video, I hope you will be addicted too."}, {"start": 17.72, "end": 23.16, "text": " If we create a virtual world with a solid block and use our knowledge from physics to implement"}, {"start": 23.16, "end": 29.080000000000002, "text": " the laws of fluid dynamics, this solid block will indeed start behaving like a fluid."}, {"start": 29.08, "end": 33.699999999999996, "text": " A baseline simulation technique for this will be referred to as flip in the videos that"}, {"start": 33.699999999999996, "end": 37.839999999999996, "text": " you see here, and it stands for Fluid Implicit Particle."}, {"start": 37.839999999999996, "end": 42.72, "text": " These simulations are often being used in the video game industry, in movies, and of"}, {"start": 42.72, "end": 48.32, "text": " course, I cannot resist to put some of them in my papers as test scenes as well."}, {"start": 48.32, "end": 53.4, "text": " In games, we are typically looking for real-time simulations, and in this case, we can only"}, {"start": 53.4, "end": 59.3, "text": " get a relatively coarse resolution simulation that lacks fine details, such as droplet"}, {"start": 59.3, "end": 61.879999999999995, "text": " formation and splashing."}, {"start": 61.879999999999995, "end": 67.4, "text": " For movies, we want the highest fidelity simulation possible with honey coiling, two-way"}, {"start": 67.4, "end": 73.84, "text": " interaction with other objects, wet sand simulations, and all of those goodies, however, these"}, {"start": 73.84, "end": 76.32, "text": " all take forever to compute."}, {"start": 76.32, "end": 78.88, "text": " This is the Bane of fluid simulators."}, {"start": 78.88, "end": 84.28, "text": " We have talked about a few earlier works that try to learn these laws via a neural network"}, {"start": 84.28, "end": 88.44, "text": " by feeding them a ton of video footage of these phenomena."}, {"start": 88.44, "end": 93.67999999999999, "text": " This is absolutely amazing and is a true game changer for learning-based techniques."}, {"start": 93.67999999999999, "end": 95.64, "text": " So why is that?"}, {"start": 95.64, "end": 101.0, "text": " Well, up until a few years ago, whenever we had a problem that was near impossible to"}, {"start": 101.0, "end": 106.03999999999999, "text": " solve with traditional techniques, we often reached out to a neural network or some other"}, {"start": 106.04, "end": 109.84, "text": " learning algorithm to solve it, often with success."}, {"start": 109.84, "end": 112.92, "text": " However, it is not the case here."}, {"start": 112.92, "end": 114.32000000000001, "text": " Something has changed."}, {"start": 114.32000000000001, "end": 119.72, "text": " What has changed is that we can already solve these problems, but we can still make use"}, {"start": 119.72, "end": 125.12, "text": " of a neural network because it can help us with something that we can already do, but"}, {"start": 125.12, "end": 127.64000000000001, "text": " it does it faster and easier."}, {"start": 127.64000000000001, "end": 132.88, "text": " However, some of these techniques for fluids are not yet 
as accurate as we would like,"}, {"start": 132.88, "end": 135.8, "text": " and therefore haven't yet seen widespread adoption."}, {"start": 135.8, "end": 137.96, "text": " So here's an incredible idea."}, {"start": 137.96, "end": 143.72, "text": " Why not compute a core simulation quickly that surely adheres to the laws of physics and"}, {"start": 143.72, "end": 147.52, "text": " then field the remaining details with a neural network?"}, {"start": 147.52, "end": 153.24, "text": " Again, flip is the baseline handcrafted technique, and you can see how the neural network"}, {"start": 153.24, "end": 158.84, "text": " infused simulation program on the left by the name ML Flip introduces these amazing"}, {"start": 158.84, "end": 161.96, "text": " details."}, {"start": 161.96, "end": 166.56, "text": " And if we compare the results with the reference simulation, which took forever, you can see"}, {"start": 166.56, "end": 171.92000000000002, "text": " that it is quite similar and it indeed feels in the right kind of details."}, {"start": 171.92000000000002, "end": 176.60000000000002, "text": " In case you are wondering about the training data, it learned the concept of splashes and"}, {"start": 176.60000000000002, "end": 182.0, "text": " droplets flying about, you guessed it right, by looking at splashes and droplets flying"}, {"start": 182.0, "end": 183.36, "text": " about."}, {"start": 183.36, "end": 189.12, "text": " So now we know that it's quite accurate, and now the ultimate question is how fast is"}, {"start": 189.12, "end": 190.12, "text": " it?"}, {"start": 190.12, "end": 195.48000000000002, "text": " Well, get this, we can expect a 10 times speed up from this."}, {"start": 195.48000000000002, "end": 201.4, "text": " So this basically means that for every 10 all-nighters, I have to wait for my simulations, I only"}, {"start": 201.4, "end": 207.0, "text": " have to wait one, and if something took only a few seconds, it now may be close to real"}, {"start": 207.0, "end": 210.16, "text": " time with this kind of visual fidelity."}, {"start": 210.16, "end": 211.16, "text": " You know what?"}, {"start": 211.16, "end": 212.32, "text": " Sign me up."}, {"start": 212.32, "end": 216.0, "text": " This video has been kindly supported by my friends at ARM Research."}, {"start": 216.0, "end": 218.92000000000002, "text": " Make sure to check them out through the link in the video description."}, {"start": 218.92, "end": 222.88, "text": " Thanks for watching, and for your generous support, I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=iM4PPGDQry0
GANPaint: An Extraordinary Image Editor AI
📝 The paper " GAN Dissection: Visualizing and Understanding Generative Adversarial Networks " and its web demo is available here: https://gandissect.csail.mit.edu http://gandissect.res.ibm.com/ganpaint.html ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #GANPaint
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This paper describes a new technique to visualize the inner workings of a generator neural network. This is a neural network that is able to create images for us. The key idea here is dissecting this neural network and looking for agreements between a set of neurons and concepts in the output image, such as trees, sky, clouds, and more. This means finding out that these neurons are responsible for buildings appearing in the image, and those ones generate clouds. Interestingly, such agreements can be found, which means way more than just creating visualizations like this, because it enables us to edit images without any artistic skills. And now, hold on to your papers. The editing part works by forcefully activating and deactivating these units, which corresponds to adding or removing these objects from an image. And look, this means that we can take an already existing image and ask this technique to remove trees from it, or perhaps add more, and the same with domes, doors, and more. Wow, this is pretty cool, but you haven't seen the best part yet. Note that so far, the amount of control we have over the image is quite limited. Fortunately, we can take this further and select a region of the image where we wish to add something new. This is suddenly so much more granular and useful. The algorithm seems to understand that trees need to be rooted somewhere and not just appear from thin air. Most of the time, anyway. Interestingly, it also understands that bricks don't really belong here, but if I add them to the side of the building, it continues in a way that is consistent with its appearance. Most of the time, anyway. And of course, it is not perfect. Here, you can see me struggling with this spaghetti monster floating in the air that used to be a tree and just refuses to be overwritten. And this is a very important lesson. Most research works are but a step in a thousand-mile journey, and each of them tries to improve upon the previous paper. This means that a few more papers down the line, this will probably take place in HD, perhaps in real time, and with much higher quality. This work also builds on previous knowledge on generative adversarial networks, and whatever the follow-up papers will contain, they will build on knowledge that was found in this work. Welcome to the wonderful world of research. And now, we can all rejoice because the authors kindly made the source code available, free, for everyone, and not only that, but there is also a web app, so you can try it yourself. This is an excellent way of maximizing the impact of your research work: let the experts improve upon it by releasing the source code, and let people play with it, even laymen. You will also find many failure cases, but also cases where it works well, and I think there is value in reporting both, so we learn a little more about this amazing algorithm. So, let's do a little research together. Make sure to post your results in the comment section. I have a feeling that lots of high-quality entertainment materials will surface very soon. I bet the authors will be grateful for the feedback as well. So, let the experiments begin. Thanks for watching and for your generous support, and I'll see you next time.
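As a rough illustration of the unit-editing idea described in this transcript, here is a hedged sketch of how one could force a chosen set of generator feature-map channels on or off with a forward hook. The toy generator, the layer choice, and the "tree" unit indices are assumptions made up for this example; the paper's GAN Dissection code operates on real pretrained generators.

```python
# Hypothetical sketch, not the paper's code: suppress or boost selected channels in
# one layer of a generator, which corresponds to removing or adding a visual concept.
import torch
import torch.nn as nn

def make_unit_edit_hook(unit_indices, mode="off", value=5.0):
    """Return a forward hook that zeroes ('off') or boosts ('on') the given channels."""
    def hook(module, inputs, output):
        edited = output.clone()
        if mode == "off":
            edited[:, unit_indices] = 0.0    # remove the concept (e.g. trees)
        else:
            edited[:, unit_indices] = value  # force the concept to appear
        return edited
    return hook

# Toy generator standing in for a real GAN generator; any conv layer could be dissected.
generator = nn.Sequential(
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
    nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
)

tree_units = [3, 17, 42]  # hypothetical units assumed to correlate with "tree"
handle = generator[0].register_forward_hook(make_unit_edit_hook(tree_units, mode="off"))

z = torch.randn(1, 128, 8, 8)
image_without_trees = generator(z)  # rendered with the chosen units suppressed
handle.remove()
```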
[{"start": 0.0, "end": 4.28, "text": " Dear Fellow Scholars, this is Dominic Papers with Kato Joana Ifahir."}, {"start": 4.28, "end": 10.48, "text": " This paper describes a new technique to visualize the inner workings of a generator neural network."}, {"start": 10.48, "end": 14.4, "text": " This is a neural network that is able to create images for us."}, {"start": 14.4, "end": 21.16, "text": " The key idea here is dissecting this neural network and looking for agreements between a set of neurons"}, {"start": 21.16, "end": 27.44, "text": " and concepts in the output image, such as trees, sky, clouds, and more."}, {"start": 27.44, "end": 32.44, "text": " This means analyzing that these neurons are responsible for buildings to appear in the image"}, {"start": 32.44, "end": 34.64, "text": " and those will generate clouds."}, {"start": 34.64, "end": 41.84, "text": " Interestingly, such agreements can be found, which means way more than just creating visualizations like this,"}, {"start": 41.84, "end": 47.040000000000006, "text": " because it enables us to edit images without any artistic skills."}, {"start": 47.040000000000006, "end": 49.32, "text": " And now, hold on to your papers."}, {"start": 49.32, "end": 54.84, "text": " The editing part works by forcefully activating and deactivating these units"}, {"start": 54.84, "end": 59.440000000000005, "text": " and correspond to adding or removing these objects from an image."}, {"start": 59.440000000000005, "end": 63.32000000000001, "text": " And look, this means that we can take an already existing image"}, {"start": 63.32000000000001, "end": 66.28, "text": " and ask this technique to remove trees from it,"}, {"start": 66.28, "end": 71.96000000000001, "text": " or perhaps add more, the same with domes, doors, and more."}, {"start": 71.96000000000001, "end": 76.84, "text": " Wow, this is pretty cool, but you haven't seen the best part yet."}, {"start": 76.84, "end": 82.36, "text": " Note that so far, the amount of control we have over the image is quite limited."}, {"start": 82.36, "end": 86.96, "text": " Unfortunately, we can take this further and select a region of the image"}, {"start": 86.96, "end": 89.16, "text": " where we wish to add something new."}, {"start": 89.16, "end": 92.96, "text": " This is suddenly so much more granular and useful."}, {"start": 92.96, "end": 97.36, "text": " The algorithm seems to understand that trees need to be rooted somewhere"}, {"start": 97.36, "end": 99.76, "text": " and not just appear from thin air."}, {"start": 99.76, "end": 101.56, "text": " Most of the time anyway."}, {"start": 101.56, "end": 106.36, "text": " Interestingly, it also understands that bricks don't really belong here,"}, {"start": 106.36, "end": 108.76, "text": " but if I add it to the side of the building,"}, {"start": 108.76, "end": 112.76, "text": " it continues in a way that is consistent with its appearance."}, {"start": 112.76, "end": 114.76, "text": " Most of the time anyway."}, {"start": 114.76, "end": 116.96000000000001, "text": " And of course, it is not perfect."}, {"start": 116.96000000000001, "end": 121.56, "text": " Here, you can see me struggling with this spaghetti monster floating in the air"}, {"start": 121.56, "end": 125.76, "text": " that used to be a tree and it just refuses to be overwritten."}, {"start": 125.76, "end": 128.16, "text": " And this is a very important lesson."}, {"start": 128.16, "end": 132.16, "text": " Most research works are by the step in a thousand mile journey"}, {"start": 132.16, "end": 
135.96, "text": " and each of them tries to improve upon the previous paper."}, {"start": 135.96, "end": 138.76000000000002, "text": " This means that a few more papers down the line,"}, {"start": 138.76000000000002, "end": 142.96, "text": " this will probably take place in HD, perhaps in real time,"}, {"start": 142.96, "end": 145.36, "text": " and with much higher quality."}, {"start": 145.36, "end": 148.16, "text": " This work also builds on previous knowledge,"}, {"start": 148.16, "end": 150.36, "text": " on generative adversarial networks,"}, {"start": 150.36, "end": 153.16, "text": " and whatever the follow-up papers will contain,"}, {"start": 153.16, "end": 157.16, "text": " they will build on knowledge that was found in this work."}, {"start": 157.16, "end": 160.16, "text": " Welcome to the wonderful world of research."}, {"start": 160.16, "end": 164.56, "text": " And now, we can all rejoice because the authors kindly made the source code"}, {"start": 164.56, "end": 167.96, "text": " available free for everyone, and not only that,"}, {"start": 167.96, "end": 171.96, "text": " but there is also a web app, so you can also try it yourself."}, {"start": 171.96, "end": 176.36, "text": " This is an excellent way of maximizing the impact of your research work."}, {"start": 176.36, "end": 179.96, "text": " Let the experts improve upon it by releasing the source code"}, {"start": 179.96, "end": 183.16, "text": " and let people play with it, even layman."}, {"start": 183.16, "end": 185.56, "text": " You will also find many failure cases,"}, {"start": 185.56, "end": 187.76, "text": " but also cases where it works well,"}, {"start": 187.76, "end": 190.56, "text": " and I think there is value in reporting both,"}, {"start": 190.56, "end": 193.96, "text": " so we learn a little more about this amazing algorithm."}, {"start": 193.96, "end": 196.56, "text": " So, let's do a little research together."}, {"start": 196.56, "end": 199.36, "text": " Make sure to post your results in the comment section."}, {"start": 199.36, "end": 203.36, "text": " I have a feeling that lots of high quality entertainment materials"}, {"start": 203.36, "end": 205.16, "text": " will surface very soon."}, {"start": 205.16, "end": 208.96, "text": " I bet the authors will also be grateful for the feedback as well."}, {"start": 208.96, "end": 211.56, "text": " So, let the experiments begin."}, {"start": 211.56, "end": 213.76000000000002, "text": " Thanks for watching and for your generous support,"}, {"start": 213.76, "end": 224.35999999999999, "text": " and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=QpptSohzuDo
This Experiment Questions Some Recent AI Results
Audible - get Nick Bostrom's "Superintelligence" for free: US: https://amzn.to/2RXr32F EU: https://amzn.to/2SqauwI 📝 The paper "Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet " is available here: https://openreview.net/pdf?id=SkfMWhAqYQ https://openreview.net/forum?id=SkfMWhAqYQ Bag of words image sources: https://people.csail.mit.edu/torralba/shortCourseRLOC/ 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-3081803/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In the previous episode, we talked about image classification, which means that we have an image as an input, and we ask a computer to figure out what is seen in this image. Learning algorithms, such as convolutional neural networks, are amazing at it. However, we just found out that even though their results are excellent, it is still quite hard to find out how they make the decision that an image depicts a dog or a cat. This is in contrast to an old and simple technique that goes by the name Bag of Words. It works a bit like looking for keywords in a document, and by using those, trying to find out what the writing is about. Kind of like the shortcut students like to take for mandatory readings. We have all done it. Now, imagine the same for images, where we slice up the image into small pieces and keep a score on what is seen in these snippets. Floppy ears, black snout, fur. Okay, we're good. We can conclude that we have a dog over here. But wait, I hear what you are saying. Károly, why do we need to digress from AI to Bag of Words? Why talk about this old method? Well, let's look at the advantages and disadvantages, and you will see in a moment. The advantage of Bag of Features is that it is quite easy to interpret because it is an open book. It gives us the scores for all of these small snippets. We know exactly how a decision is being made. A disadvantage, one would say, is that because it works per snippet, it ignores the bigger spatial relationships in an image, and therefore, overall, it must be vastly inferior to a neural network. Right? Well, let's set up an experiment and see. This is a paper from the same group as the previous episode, at the University of Tübingen. The experiment works the following way. Let's try to combine Bag of Words with neural networks by slicing up the image into the same patches, then feeding them into a neural network and asking it to classify them. In this case, the neural network will do many small classification tasks on image snippets instead of one big decision for the full image. The paper discusses that the final classification also involves evaluating heat maps and more. This way, we are hoping that we get a technique where a neural network would explain its decisions much like how Bag of Words works. For now, let's call these networks BagNets. And now, hold on to your papers, because the results are really surprising. As expected, it is true that looking at small snippets of the image can lead to misunderstandings. For instance, this image contains a soccer ball, but when zooming into small patches, it might seem like this is a cowboy hat on top of the head of this child. However, what is unexpected is that even with this, BagNet produces surprisingly similar results to a state-of-the-art neural network by the name ResNet. This is… Wow. This has several corollaries. Let's start with the cool one. This means that neural networks are great at identifying objects in scrambled images, but humans are not. The reason for that is that the order of the patches doesn't really matter. We now have a better reason why this is the case: doing all this classification for many small tasks independently has superpowers when it comes to processing scrambled images.
The other, more controversial corollary is that this inevitably means that some results that show the superiority of deep neural networks over the good old bag of features come not from using a superior method, but from careful fine-tuning. Not all results, some results. As always, a good piece of research challenges our underlying assumptions and sometimes, as in this case, even our sanity. There's a lot to say about this topic and we have only scratched the surface, so take this as a thought-provoking idea that is worthy of further discussion. Really cool work. I love it. This video has been supported by Audible. By using Audible, you get two excellent audiobooks, free of charge. I recommend that you click the link in the video description, sign up for free, and check out the book Superintelligence by Nick Bostrom. Some more AI for you, whenever you are stuck in traffic, or have to clean the house. I talked about this book earlier, and I see that many of you Fellow Scholars have been enjoying it. If you haven't read it, make sure to sign up now, because this book discusses how it could be possible to build a superintelligent AI and what such an all-knowing being would be like. You get this book free of charge, and you can cancel at any time. You can't go wrong with this. Head on over to the video description and sign up under the appropriate links. Thanks for watching and for your generous support, and I'll see you next time.
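To make the bag-of-local-features idea in the transcript above concrete, here is a minimal sketch of a patch-wise classifier whose per-patch class scores are simply averaged, which is why scrambling the patches barely changes the prediction. The patch size, backbone, and aggregation details are illustrative stand-ins, not the BagNet architecture from the paper.

```python
# Hypothetical sketch, not the BagNet code: classify many small patches independently,
# then average the per-patch class scores into one image-level decision.
import torch
import torch.nn as nn

class TinyBagNet(nn.Module):
    def __init__(self, num_classes=10, patch=17):
        super().__init__()
        self.patch = patch
        # Small CNN that only ever sees one patch at a time (tiny receptive field).
        self.patch_classifier = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, image):
        # Slice the image into non-overlapping patches (BagNet uses a dense stride).
        b, c, h, w = image.shape
        p = self.patch
        logits = []
        for y in range(0, h - p + 1, p):
            for x in range(0, w - p + 1, p):
                logits.append(self.patch_classifier(image[:, :, y:y+p, x:x+p]))
        # Averaging ignores where each patch came from, hence the robustness to scrambling.
        return torch.stack(logits, dim=0).mean(dim=0)

scores = TinyBagNet()(torch.randn(1, 3, 224, 224))  # per-class evidence pooled over patches
```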
[{"start": 0.0, "end": 4.5200000000000005, "text": " Dear Fellow Scholars, this is Two Minute Papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.5200000000000005, "end": 9.28, "text": " In the previous episode, we talked about image classification, which means that we have an"}, {"start": 9.28, "end": 15.4, "text": " image as an input, and we ask a computer to figure out what is seen in this image."}, {"start": 15.4, "end": 20.080000000000002, "text": " Learning algorithms, such as convolution on your own networks, are amazing at it."}, {"start": 20.080000000000002, "end": 25.34, "text": " However, we just found out that even though their results are excellent, it is still quite"}, {"start": 25.34, "end": 31.58, "text": " hard to find out how they get to make a decision that an image depicts a dog or a cat."}, {"start": 31.58, "end": 37.14, "text": " This is in contrast to an old and simple technique that goes by the name Bag of Words."}, {"start": 37.14, "end": 42.019999999999996, "text": " It works a bit like looking for keywords in a document, and by using those, trying to"}, {"start": 42.019999999999996, "end": 45.019999999999996, "text": " find out what the writing is about."}, {"start": 45.019999999999996, "end": 49.14, "text": " Kind of like the shortcut students like to take for mandatory readings."}, {"start": 49.14, "end": 50.58, "text": " We have all done it."}, {"start": 50.58, "end": 56.739999999999995, "text": " Now, imagine the same for images where we slice up the image into small pieces and keep"}, {"start": 56.739999999999995, "end": 60.06, "text": " a score on what is seen in these snippets."}, {"start": 60.06, "end": 62.66, "text": " Floppy ears, blacks now, fur."}, {"start": 62.66, "end": 64.34, "text": " Okay, we're good."}, {"start": 64.34, "end": 67.14, "text": " We can conclude that we have a dog over here."}, {"start": 67.14, "end": 69.38, "text": " But wait, I hear what you are saying."}, {"start": 69.38, "end": 74.02, "text": " K\u00e1roly, why do we need to digress from AI to Bag of Words?"}, {"start": 74.02, "end": 76.06, "text": " Why talk about this old method?"}, {"start": 76.06, "end": 81.58, "text": " Well, let's look at the advantages and disadvantages and you will see in a moment."}, {"start": 81.58, "end": 86.34, "text": " The advantage of Bag of Features is that it is quite easy to interpret because it is"}, {"start": 86.34, "end": 87.58, "text": " an open book."}, {"start": 87.58, "end": 90.82000000000001, "text": " It gives us the scores for all of these small snippets."}, {"start": 90.82000000000001, "end": 94.42, "text": " We know exactly how a decision is being made."}, {"start": 94.42, "end": 99.82000000000001, "text": " A disadvantage, one would say, is that because it works per snippet, it ignores the bigger"}, {"start": 99.82000000000001, "end": 106.02000000000001, "text": " spatial relationships in an image and therefore overall, it must be vastly inferior to an"}, {"start": 106.02, "end": 107.02, "text": " neural network."}, {"start": 107.02, "end": 108.02, "text": " Right?"}, {"start": 108.02, "end": 111.78, "text": " Well, let's set up an experiment and see."}, {"start": 111.78, "end": 116.82, "text": " This is a paper from the same group as the previous episode at the University of Tobingo."}, {"start": 116.82, "end": 119.22, "text": " The experiment works the following way."}, {"start": 119.22, "end": 124.06, "text": " Let's try to combine Bag of Words with neural networks by slicing up the image into the"}, {"start": 124.06, 
"end": 130.74, "text": " same patches and then feed them into a neural network and ask it to classify them."}, {"start": 130.74, "end": 135.85999999999999, "text": " In this case, the neural network will do many small classification tasks on image snippets"}, {"start": 135.86, "end": 139.74, "text": " instead of one big decision for the full image."}, {"start": 139.74, "end": 145.86, "text": " The paper discusses that the final classification also involves evaluating heat maps and more."}, {"start": 145.86, "end": 151.26000000000002, "text": " This way, we are hoping that we get a technique where a neural network would explain its decisions"}, {"start": 151.26000000000002, "end": 154.26000000000002, "text": " much like how Bag of Words works."}, {"start": 154.26000000000002, "end": 157.62, "text": " For now, let's call these networks, bagnets."}, {"start": 157.62, "end": 162.3, "text": " And now, hold on to your papers because the results are really surprising."}, {"start": 162.3, "end": 168.14000000000001, "text": " As expected, it is true that looking at small snippets of the image can lead to misunderstandings."}, {"start": 168.14000000000001, "end": 173.22, "text": " For instance, this image contains a soccer ball, but when zooming into small patches, it"}, {"start": 173.22, "end": 178.42000000000002, "text": " might seem like this is a cowboy hat on top of the head of this child."}, {"start": 178.42000000000002, "end": 184.58, "text": " However, what is unexpected is that even with this, Bagnet produces surprisingly similar"}, {"start": 184.58, "end": 189.9, "text": " results to a state of the art neural network by the name ResNet."}, {"start": 189.9, "end": 190.9, "text": " This is\u2026"}, {"start": 190.9, "end": 191.9, "text": " Wow."}, {"start": 191.9, "end": 194.02, "text": " This has several corollaries."}, {"start": 194.02, "end": 195.74, "text": " Let's start with the cool one."}, {"start": 195.74, "end": 200.98000000000002, "text": " This means that neural networks are great at identifying objects in scrambled images,"}, {"start": 200.98000000000002, "end": 202.5, "text": " but humans are not."}, {"start": 202.5, "end": 206.82, "text": " The reason for that is that the order of the task don't really matter."}, {"start": 206.82, "end": 212.06, "text": " We now have a better reason why this is the case, doing all this classification for many"}, {"start": 212.06, "end": 218.46, "text": " small tasks independently has superpowers when it comes to processing scrambled images."}, {"start": 218.46, "end": 223.74, "text": " The other more controversial corollary is that this inevitably means that some results"}, {"start": 223.74, "end": 229.5, "text": " that show the superiority of deep neural networks over the good old bag of features come"}, {"start": 229.5, "end": 235.22, "text": " not from using a superior method, but from careful fine tuning."}, {"start": 235.22, "end": 238.18, "text": " Not all results, some results."}, {"start": 238.18, "end": 243.94, "text": " As always, a good piece of research challenges are underlying assumptions and sometimes, in"}, {"start": 243.94, "end": 246.06, "text": " this case, even our sanity."}, {"start": 246.06, "end": 250.7, "text": " There's a lot to say about this topic and we have only scratched the surface, so take"}, {"start": 250.7, "end": 255.66, "text": " this as a thought-provoking idea that is worthy of further discussion."}, {"start": 255.66, "end": 256.66, "text": " Really cool work."}, {"start": 256.66, "end": 257.86, "text": " I 
love it."}, {"start": 257.86, "end": 260.3, "text": " This video has been supported by Audible."}, {"start": 260.3, "end": 264.74, "text": " By using Audible, you get two excellent audiobooks, free of charge."}, {"start": 264.74, "end": 269.7, "text": " I recommend that you click the link in the video description, sign up for free and check"}, {"start": 269.7, "end": 273.26, "text": " out the book Super Intelligence by Nick Pastrum."}, {"start": 273.26, "end": 278.09999999999997, "text": " Some more AI for you, whenever you are stuck in traffic, or have to clean the house."}, {"start": 278.09999999999997, "end": 282.94, "text": " I talked about this book earlier and I see that many of you fellow scholars have been enjoying"}, {"start": 282.94, "end": 283.94, "text": " it."}, {"start": 283.94, "end": 288.7, "text": " If you haven't read it, make sure to sign up now because this book discusses how it could"}, {"start": 288.7, "end": 293.82, "text": " be possible to build a super intelligent AI and what such an all-knowing being would"}, {"start": 293.82, "end": 294.98, "text": " be like."}, {"start": 294.98, "end": 298.9, "text": " You get this book free of charge and you can cancel at any time."}, {"start": 298.9, "end": 300.58, "text": " You can't go wrong with this."}, {"start": 300.58, "end": 304.18, "text": " Head on to the video description and sign up under the appropriate links."}, {"start": 304.18, "end": 333.74, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=YFL-MI5xzgg
Do Neural Networks Need To Think Like Humans?
❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 📝 The paper "ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness " is available here: https://openreview.net/forum?id=Bygh9j09KX https://blog.usejournal.com/why-deep-learning-works-differently-than-we-thought-ec28823bdbc https://github.com/rgeirhos/texture-vs-shape Neural network visualization footage source: https://www.youtube.com/watch?v=1zvohULpe_0 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. As convolutional neural network-based image classifiers are able to correctly identify objects in images and are getting more and more pervasive, scientists at the University of Tübingen decided to embark on a project to learn more about the inner workings of these networks. Their key question was whether they really work similarly to humans or not. Now, one way of doing this is visualizing the inner workings of the neural network. This is a research field on its own; I try to report on it to you every now and then, and we talked about some damn good papers on this, with more to come. A different way would be to disregard the inner workings of the neural network, in other words, to treat it like a black box, at least temporarily. But what does this mean exactly? Let's have a look at an example. And in this example, our test subject shall be none other than this cat. Here we have a bunch of neural networks that have been trained on the classical ImageNet dataset, and a set of humans. This cat is successfully identified by all classical neural network architectures and most humans. Now onwards to a grayscale version of the same cat. The neural networks are still quite confident that this is a cat, some humans faltered, but still nothing too crazy going on here. Now let's look at the silhouette of the cat. Whoa! Suddenly humans are doing much better at identifying the cat than neural networks. This is even more so true when we are only given the edges of the image. However, when looking at a heavily zoomed-in image of the texture of an Indian elephant, neural networks are very confident with their correct guess, where some humans falter. Ha! We have a lead here. It may be that, as opposed to humans, neural networks think more in terms of textures than shapes. Let's test that hypothesis. Step number one, Indian elephant. This is correctly identified. Now, cat. Again, correctly identified. And now, hold on to your papers: a cat with an elephant texture. And there we go. A cat with an elephant texture is still a cat to us humans, but is an elephant to convolutional neural networks. After looking some more at the problem, they found that the most common convolutional neural network architectures that were trained on the ImageNet dataset vastly overvalue textures over shapes. That is fundamentally different from how we humans think. So, can we try to remedy this problem? Is this even a problem at all? Neural networks need not think like humans, but who knows, this is research; we might find something useful along the way. So, how could we create a dataset that would teach a neural network a better understanding of shapes? Well, that's a great question, and one possible answer is style transfer. Let me explain. Style transfer is the process of fusing together two images, where the content of one image and the style of the other is taken. So now, let's take the ImageNet dataset and run style transfer on each of these images. This is useful because it repaints the textures, but the shapes are mostly left intact. The authors call it the Stylized-ImageNet dataset and have made it publicly available for everyone. This new dataset will no doubt coerce the neural network to build a better understanding of shapes, which will bring it closer to human thinking. We don't know if that is a good thing yet, so let's look at the results. And here comes the surprise.
When training a neural network architecture by the name ResNet-50 jointly on the regular and stylized ImageNet datasets, after a little fine-tuning, they found two remarkable things. One, the resulting neural network now sees more similarly to humans. The blue squares on the right denote the old, texture-based thinking, but the new neural networks, denoted with the orange squares, are now much closer to the shape-based thinking of humans, which is indicated with the red circles. And now, hold on to your papers, because two, the new neural network also outperforms the old ones in terms of accuracy. Dear Fellow Scholars, this is research at its finest. The authors explored an interesting idea, and look where they ended up. Amazing. If you enjoyed this episode and you feel that a bunch of these videos a month are worth $3, please consider supporting us on Patreon. This helps us become more independent and create better videos for you. You can find us at patreon.com slash TwoMinutePapers, or just click the link in the video description. Thanks for watching and for your generous support, and I'll see you next time.
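Here is a hedged sketch of the training recipe described above: fine-tuning a ResNet-50 on a mixture of regular and stylized images so that texture cues become unreliable and shape cues pay off. The dataset paths, hyperparameters, and the untrained starting weights are placeholder assumptions, not the authors' exact setup.

```python
# Hypothetical sketch, not the paper's training script: one pass of joint fine-tuning
# on regular ImageNet plus a style-transferred copy with the same labels.
import torch
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms, models

preprocess = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.ToTensor(),
])

# "imagenet/" holds the original images and "stylized-imagenet/" the stylized copies;
# both directory layouts are assumptions made for this example.
regular  = datasets.ImageFolder("imagenet/train", transform=preprocess)
stylized = datasets.ImageFolder("stylized-imagenet/train", transform=preprocess)
loader = DataLoader(ConcatDataset([regular, stylized]), batch_size=64, shuffle=True)

model = models.resnet50(weights=None)  # in practice, one would start from pretrained weights
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = torch.nn.CrossEntropyLoss()

for images, labels in loader:  # a single epoch of the joint fine-tuning loop
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```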
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zonai Fahir."}, {"start": 4.4, "end": 10.200000000000001, "text": " As convolutional neural network-based image classifiers are able to correctly identify objects"}, {"start": 10.200000000000001, "end": 16.6, "text": " and images and are getting more and more pervasive scientists at the University of Tobingen decided"}, {"start": 16.6, "end": 22.2, "text": " to embark on a project to learn more about the inner workings of these networks."}, {"start": 22.2, "end": 27.0, "text": " Their key question was whether they really work similarly to humans or not."}, {"start": 27.0, "end": 32.4, "text": " Now, one way of doing this is visualizing the inner workings of the neural network."}, {"start": 32.4, "end": 37.2, "text": " This is a research field on its own. I tried to report on it to you every now and then"}, {"start": 37.2, "end": 41.2, "text": " and we talked about some damn good papers on this with more to come."}, {"start": 41.2, "end": 45.400000000000006, "text": " A different way would be to disregard the inner workings of the neural network"}, {"start": 45.400000000000006, "end": 50.2, "text": " in other words to treat it like a black box at least temporarily."}, {"start": 50.2, "end": 54.0, "text": " But what does this mean exactly? Let's have a look at an example."}, {"start": 54.0, "end": 59.0, "text": " And in this example, our test subject shall be none other than this cat."}, {"start": 59.0, "end": 64.6, "text": " Here we have a bunch of neural networks that have been trained on the classical image net data set"}, {"start": 64.6, "end": 66.6, "text": " and a set of humans."}, {"start": 66.6, "end": 73.0, "text": " This cat is successfully identified by all classical neural network architectures and most humans."}, {"start": 73.0, "end": 77.8, "text": " Now onwards to a grayscale version of the same cat."}, {"start": 77.8, "end": 83.0, "text": " The neural networks are still quite confident that this is a cat, some humans faltered,"}, {"start": 83.0, "end": 86.0, "text": " but still nothing too crazy going on here."}, {"start": 86.0, "end": 89.0, "text": " Now let's look at the silhouette of the cat."}, {"start": 89.0, "end": 96.6, "text": " Whoa! Suddenly humans are doing much better at identifying the cat than neural networks."}, {"start": 96.6, "end": 101.6, "text": " This is even more so true when we are only given the edges of the image."}, {"start": 101.6, "end": 107.4, "text": " However, when looking at a heavily zoomed in image of the texture of an Indian elephant,"}, {"start": 107.4, "end": 112.8, "text": " neural networks are very confident with their correct guess where some humans falter."}, {"start": 112.8, "end": 115.6, "text": " Ha! We have a lead here."}, {"start": 115.6, "end": 122.6, "text": " It may be that as opposed to humans, neural networks think more in terms of textures than shapes."}, {"start": 122.6, "end": 124.8, "text": " Let's test that hypothesis."}, {"start": 124.8, "end": 127.6, "text": " Step number one, Indian elephant."}, {"start": 127.6, "end": 129.8, "text": " This is correctly identified."}, {"start": 129.8, "end": 131.8, "text": " Now, cat."}, {"start": 131.8, "end": 134.8, "text": " Again, correctly identified."}, {"start": 134.8, "end": 140.6, "text": " And now, hold on to your papers, a cat with an elephant texture."}, {"start": 140.6, "end": 146.2, "text": " And there we go. 
A cat with an elephant texture is still a cat to us humans,"}, {"start": 146.2, "end": 150.2, "text": " but is an elephant to convolution on your own networks."}, {"start": 150.2, "end": 155.0, "text": " After looking some more at the problem, they found that the most common convolution on your"}, {"start": 155.0, "end": 161.79999999999998, "text": " own network architectures that were trained on the image net dataset vastly over value textures over shapes."}, {"start": 161.79999999999998, "end": 166.0, "text": " That is fundamentally different to how we humans think."}, {"start": 166.0, "end": 169.0, "text": " So, can we try to remedy this problem?"}, {"start": 169.0, "end": 171.0, "text": " Is this even a problem at all?"}, {"start": 171.0, "end": 175.6, "text": " Neural networks need not to think like humans, but who knows its research?"}, {"start": 175.6, "end": 178.2, "text": " We might find something useful along the way."}, {"start": 178.2, "end": 185.2, "text": " So, how could we create a dataset that would teach a neural network a better understanding of shapes?"}, {"start": 185.2, "end": 190.6, "text": " Well, that's a great question and one possible answer is, style transfer."}, {"start": 190.6, "end": 191.8, "text": " Let me explain."}, {"start": 191.8, "end": 195.8, "text": " Style transfer is the process of fusing together two images"}, {"start": 195.8, "end": 200.60000000000002, "text": " where the content of one image and the style of the other is taken."}, {"start": 200.60000000000002, "end": 207.0, "text": " So now, let's take the image net dataset and run style transfer on each of these images."}, {"start": 207.0, "end": 212.8, "text": " This is useful because it repaints the textures, but the shapes are mostly left intact."}, {"start": 212.8, "end": 219.20000000000002, "text": " The authors call it the stylized image net dataset and have made it publicly available for everyone."}, {"start": 219.20000000000002, "end": 225.4, "text": " This new dataset will no doubt coerce the neural network to build a better understanding of shapes"}, {"start": 225.4, "end": 228.20000000000002, "text": " which will bring it closer to human thinking."}, {"start": 228.20000000000002, "end": 232.6, "text": " We don't know if that is a good thing yet, so let's look at the results."}, {"start": 232.6, "end": 235.0, "text": " And here comes the surprise."}, {"start": 235.0, "end": 239.4, "text": " When training a neural network architecture by the name ResNet50,"}, {"start": 239.4, "end": 243.6, "text": " jointly on the regular and stylized image net dataset,"}, {"start": 243.6, "end": 248.4, "text": " after a little fine tuning, they have found two remarkable things."}, {"start": 248.4, "end": 253.8, "text": " One, the resulting neural network now sees more similarly to humans."}, {"start": 253.8, "end": 259.0, "text": " The old blue squares on the right mean that the old thinking is texture-based,"}, {"start": 259.0, "end": 266.40000000000003, "text": " but the new neural networks denoted with the orange squares are now much closer to the shape-based thinking of humans,"}, {"start": 266.40000000000003, "end": 269.0, "text": " which is indicated with the red circles."}, {"start": 271.0, "end": 274.40000000000003, "text": " And now, hold on to your papers because two,"}, {"start": 274.40000000000003, "end": 280.0, "text": " the new neural network also outperforms the old ones in terms of accuracy."}, {"start": 280.0, "end": 284.2, "text": " Dear Fellow Scholars, this is research at its 
finest."}, {"start": 284.2, "end": 288.6, "text": " The authors explored an interesting idea and look where they ended up."}, {"start": 288.6, "end": 289.8, "text": " Amazing."}, {"start": 289.8, "end": 295.4, "text": " If you enjoyed this episode and you feel that a bunch of these videos a month are worth $3,"}, {"start": 295.4, "end": 298.0, "text": " please consider supporting us on Patreon."}, {"start": 298.0, "end": 301.8, "text": " This helps us get more independent and create better videos for you."}, {"start": 301.8, "end": 305.8, "text": " You can find us at patreon.com slash two-minute papers."}, {"start": 305.8, "end": 308.8, "text": " Or just click the link in the video description."}, {"start": 308.8, "end": 313.2, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=pv8Sl2rWyCQ
Google AI's Take on How To Fix Peer Review
📝 The paper "Avoiding a Tragedy of the Commons in the Peer Review Process" is available here: https://arxiv.org/abs/1901.06246 The NeurIPS experiment: http://blog.mrtz.org/2014/12/15/the-nips-experiment.html Fluid video source: https://www.youtube.com/watch?v=MCHw6fUyLMY ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/images/id-3653385/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. It is time for a position paper. This paper does not have the usual visual fireworks that you see in many of these videos; however, it addresses the cornerstone of scientific publication, which is none other than peer review. When a research group is done with a project, they don't just write up the results and check the paper into a repository; instead, they submit it to a scientific venue, for instance, a journal or a conference. Then the venue finds several other researchers who are willing to go through the work with a fine-tooth comb. In the case of double-blind reviews, both the authors and the reviewers remain anonymous to each other. The reviewers now check whether the results are indeed significant, novel, credible, and reproducible. If the venue is really good, this process is very tough and thorough, and it becomes the scientific version of beating the heck out of someone, but in a constructive manner. If the work is able to withstand serious criticism and ticks the required boxes, it can proceed to get published at this venue. Otherwise, it is rejected. So what we heard so far is that the research work is being reviewed; however, scientists at the Google AI lab raised the issue that the reviewers themselves should also be reviewed. Consider the fact that all scientists are expected to spend a certain percentage of their time serving the greater good. For instance, throughout my PhD studies, I have reviewed over 30 papers, and I am not even done yet. These paper reviews take place without compensation. Let's call this issue number one for now. Issue number two is the explosive growth of the number of submissions over time at the most prestigious machine learning and computer vision conferences. Have a look here. It is of utmost importance that we create a review system that is as fair as possible; after all, thousands of hours spent on research projects are at stake. Add these two issues together, and we get a system where the average quality of the reviews will almost certainly decrease over time. Quoting the authors: we believe the key issues here are structural. Reviewers donate their valuable time and expertise anonymously as a service to the community with no compensation or attribution, are increasingly taxed by a rapidly increasing number of submissions, and are held to no enforced standards. In Two Minute Papers episode number 84, so more than 200 episodes ago, we discussed the NeurIPS experiment. Leave a comment if you have been around back then and you enjoyed Two Minute Papers before it was cool. But don't worry if this is not the case, this was long ago, so here's a short summary. A large number of papers were secretly disseminated to multiple committees who would review them without knowing about each other, and we would have a look at whether they would accept or reject the same papers. Re-reviewing papers and seeing if the results are the same, if you will. If we use sophisticated mathematics to create new scientific methods, why not use mathematics to evaluate our own processes? So after doing that, it was found that at a given prescribed acceptance ratio, there was a disagreement for 57% of the papers. So is this number good or bad? Let's imagine a completely hypothetical committee that has no idea what they are doing, and as a review, they basically toss a coin and accept or reject the paper based on the result of the coin toss. Let's call them the CoinFlip Committee.
The calculations conclude that the CoinFlip Committee would have a disagreement ratio of about 77%. So: experts, 57% disagreement; CoinFlip Committee, 77% disagreement. And now, to answer whether this is good or bad: this is hardly something to be proud of. The consistency of expert reviewers is significantly closer to a coin flip than to a hypothetical perfect review process. If that is not an indication that we have to do something about this, I am not sure what is. So in this paper, the authors propose two important changes to the system to remedy these issues. Remedy number one: they propose a rubric, a seven-point document to evaluate the quality of the reviews. Again, not only are the papers reviewed, but the reviews themselves. It is similar to the ones used in public schools to evaluate student performance, to make sure the review was objective, consistent, and fair. Remedy number two: reviewers should be incentivized and rewarded for their work. The authors argue that a professional service should be worthy of professional compensation. Now, of course, this sounds great, but it also requires money. Where should the funds come from? The paper discusses several options. For instance, this could be funded through sponsorships, or by asking for a reasonable fee when submitting a paper for peer review and introducing a new fee structure for science conferences. This is a short, five-page paper that is easily readable for everyone and raises excellent points about a very important problem, so needless to say, I highly recommend that you give it a read. As always, the link is in the video description. I hope this video will help raise more awareness of this problem. If we are to create a fair system for evaluating research papers, we had better get this right. Thanks for watching and for your generous support, and I'll see you next time.
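As a back-of-the-envelope check of the numbers quoted above, here is one way the roughly 77% coin-flip disagreement figure can be reconstructed, assuming an acceptance rate of about 22.5% (as in the NeurIPS experiment) and disagreement measured on papers accepted by one of the two committees; the paper's exact definition may differ, so treat this only as an illustrative reconstruction.

```python
# Back-of-the-envelope sketch of the disagreement figures; the 22.5% acceptance rate
# is an assumption borrowed from the NeurIPS experiment, not stated in the transcript.
acceptance_rate = 0.225

# A committee that accepts a random 22.5% of submissions, independently of quality,
# rejects a paper that the other committee accepted with probability 1 - 0.225.
random_committee_disagreement = 1.0 - acceptance_rate
print(f"random committee: {random_committee_disagreement:.1%} disagreement")  # ~77.5%

expert_disagreement = 0.57  # figure reported for the actual reviewers
print(f"expert reviewers: {expert_disagreement:.1%} disagreement")
```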
[{"start": 0.0, "end": 4.46, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.46, "end": 7.2, "text": " It is time for a position paper."}, {"start": 7.2, "end": 12.120000000000001, "text": " This paper does not have the usual visual fireworks that you see in many of these videos,"}, {"start": 12.120000000000001, "end": 18.0, "text": " however, it addresses the cornerstone of scientific publication which is none other than peer"}, {"start": 18.0, "end": 19.0, "text": " review."}, {"start": 19.0, "end": 23.2, "text": " When a research group is done with the project, they don't just write up the results and"}, {"start": 23.2, "end": 30.0, "text": " check the paper in a repository, but instead they submit it to a scientific venue, for instance,"}, {"start": 30.0, "end": 32.76, "text": " a journal or a conference."}, {"start": 32.76, "end": 37.519999999999996, "text": " Then the venue finds several other researchers who are willing to go through the work with"}, {"start": 37.519999999999996, "end": 39.32, "text": " a fine tooth comb."}, {"start": 39.32, "end": 44.879999999999995, "text": " In the case of double blind reviews, both the authors and the reviewers remain anonymous"}, {"start": 44.879999999999995, "end": 46.08, "text": " to each other."}, {"start": 46.08, "end": 52.2, "text": " The reviewers now check whether the results are indeed significant, novel, credible and"}, {"start": 52.2, "end": 53.2, "text": " reproducible."}, {"start": 53.2, "end": 59.36, "text": " If the venue is really good, this process is very tough and thorough, and this process"}, {"start": 59.36, "end": 64.72, "text": " becomes the scientific version of beating the heck out of someone but in a constructive"}, {"start": 64.72, "end": 65.72, "text": " manner."}, {"start": 65.72, "end": 71.92, "text": " If the work is able to withstand serious criticism and takes the required boxes, it can proceed"}, {"start": 71.92, "end": 73.96000000000001, "text": " to get published at this venue."}, {"start": 73.96000000000001, "end": 76.2, "text": " Otherwise, it is rejected."}, {"start": 76.2, "end": 81.44, "text": " So what we heard so far is that the research work is being reviewed, however, scientists"}, {"start": 81.44, "end": 87.48, "text": " at the Google AI lab raised the issue that the reviewers themselves should also be reviewed."}, {"start": 87.48, "end": 92.75999999999999, "text": " Consider the fact that all scientists are expected to spend a certain percentage of their time"}, {"start": 92.75999999999999, "end": 94.67999999999999, "text": " to serve the greater good."}, {"start": 94.67999999999999, "end": 100.24, "text": " For instance, throughout my PhD studies, I have reviewed over 30 papers and I am not even"}, {"start": 100.24, "end": 101.24, "text": " done yet."}, {"start": 101.24, "end": 104.47999999999999, "text": " These paper reviews take place without compensation."}, {"start": 104.47999999999999, "end": 107.4, "text": " Let's call this issue number one for now."}, {"start": 107.4, "end": 112.24000000000001, "text": " issue number two is the explosive growth of the number of submissions over time at the"}, {"start": 112.24000000000001, "end": 116.44000000000001, "text": " most prestigious machine learning and computer vision conferences."}, {"start": 116.44000000000001, "end": 117.48, "text": " Have a look here."}, {"start": 117.48, "end": 123.24000000000001, "text": " It is of utmost importance that we create a review system that is as fair as 
possible,"}, {"start": 123.24000000000001, "end": 127.72, "text": " after all thousands of hours spent on research projects are at stake."}, {"start": 127.72, "end": 132.64000000000001, "text": " At these two issues together and we get a system where the average quality of the reviews"}, {"start": 132.64000000000001, "end": 136.12, "text": " will almost certainly decrease over time."}, {"start": 136.12, "end": 140.72, "text": " Quoting the authors, we believe the key issues here are structural."}, {"start": 140.72, "end": 146.0, "text": " Reviewers donate their valuable time and expertise anonymously as a service to the community"}, {"start": 146.0, "end": 152.72, "text": " with no compensation or attribution are increasingly taxed by a rapidly increasing number of submissions"}, {"start": 152.72, "end": 155.68, "text": " and are held to no-end force standards."}, {"start": 155.68, "end": 162.20000000000002, "text": " In two-minute papers episode number 84, so more than 200 episodes ago, we discussed the"}, {"start": 162.20000000000002, "end": 163.8, "text": " Nurebs experiment."}, {"start": 163.8, "end": 167.88000000000002, "text": " Leave a comment if you have been around back then and you enjoyed two-minute papers before"}, {"start": 167.88000000000002, "end": 169.08, "text": " it was cool."}, {"start": 169.08, "end": 174.28, "text": " But don't worry if this is not the case, this was long ago, so here's a short summary."}, {"start": 174.28, "end": 179.88000000000002, "text": " A large number of papers were secretly disseminated to multiple committees who would review it"}, {"start": 179.88000000000002, "end": 185.0, "text": " without knowing about each other and we would have a look whether they would accept or reject"}, {"start": 185.0, "end": 186.88000000000002, "text": " the same papers."}, {"start": 186.88000000000002, "end": 190.88000000000002, "text": " Re-review papers and see if the results are the same, if you will."}, {"start": 190.88, "end": 196.24, "text": " If we use sophisticated mathematics to create new scientific methods, why not use mathematics"}, {"start": 196.24, "end": 199.0, "text": " to evaluate our own processes?"}, {"start": 199.0, "end": 204.96, "text": " So after doing that, it was found that at a given prescribed acceptance ratio, there was"}, {"start": 204.96, "end": 209.24, "text": " a disagreement for 57% of the papers."}, {"start": 209.24, "end": 212.35999999999999, "text": " So is this number good or bad?"}, {"start": 212.35999999999999, "end": 217.88, "text": " Let's imagine a completely hypothetical committee that has no idea what they are doing and as"}, {"start": 217.88, "end": 224.24, "text": " a review, they basically toss up a coin and accept or reject the paper based on the result"}, {"start": 224.24, "end": 225.24, "text": " of the coin toss."}, {"start": 225.24, "end": 227.88, "text": " Let's call them the CoinFlip Committee."}, {"start": 227.88, "end": 232.4, "text": " The calculations conclude that the CoinFlip Committee would have a disagreement ratio of"}, {"start": 232.4, "end": 234.88, "text": " about 77%."}, {"start": 234.88, "end": 242.2, "text": " So experts, 57% disagreement, CoinFlip Committee, 77% disagreement."}, {"start": 242.2, "end": 247.76, "text": " And now to answer whether this is good or bad, this is hardly something to be proud of."}, {"start": 247.76, "end": 254.35999999999999, "text": " The consistency of expert reviewers is significantly closer to a coin flip than to a hypothetical"}, {"start": 254.35999999999999, 
"end": 256.56, "text": " perfect review process."}, {"start": 256.56, "end": 260.8, "text": " If that is not an indication that we have to do something about this, I am not sure what"}, {"start": 260.8, "end": 261.8, "text": " is."}, {"start": 261.8, "end": 268.2, "text": " So in this paper, the authors propose two important changes to the system to remedy these issues."}, {"start": 268.2, "end": 273.84, "text": " Remedy number one, they propose a rubric, a seven-point document to evaluate the quality"}, {"start": 273.84, "end": 274.84, "text": " of the reviews."}, {"start": 274.84, "end": 279.44, "text": " Again, not only the papers are reviewed, but the reviews themselves."}, {"start": 279.44, "end": 284.67999999999995, "text": " It is similar to the ones used in public schools to evaluate student performance to make sure"}, {"start": 284.67999999999995, "end": 290.15999999999997, "text": " whether the review was objective, consistent, and fair."}, {"start": 290.15999999999997, "end": 295.4, "text": " Remedy number two, reviewers should be incentivized and rewarded for their work."}, {"start": 295.4, "end": 301.28, "text": " The authors argue that a professional service should be worthy of professional compensation."}, {"start": 301.28, "end": 305.79999999999995, "text": " Now, of course, this sounds great, but this also requires money."}, {"start": 305.79999999999995, "end": 307.71999999999997, "text": " Where should the funds come from?"}, {"start": 307.71999999999997, "end": 310.28, "text": " The paper discusses several options."}, {"start": 310.28, "end": 315.67999999999995, "text": " For instance, this could be funded through sponsorships, or asking for a reasonable fee"}, {"start": 315.67999999999995, "end": 320.91999999999996, "text": " when submitting a paper for peer review and introducing a new fee structure for science"}, {"start": 320.91999999999996, "end": 322.32, "text": " conferences."}, {"start": 322.32, "end": 328.0, "text": " This is a short, five-page paper that is easily readable for everyone, raises excellent"}, {"start": 328.0, "end": 333.36, "text": " points for a very important problem, so needless to say, I highly recommend that you give it"}, {"start": 333.36, "end": 334.36, "text": " a read."}, {"start": 334.36, "end": 336.76, "text": " As always, the link is in the video description."}, {"start": 336.76, "end": 340.44, "text": " I hope this video will help raising more awareness to this problem."}, {"start": 340.44, "end": 345.68, "text": " If we are to create a fair system for evaluating research papers, we better get this right."}, {"start": 345.68, "end": 358.04, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=1gWpFuQlBsg
AlphaZero: DeepMind’s AI Works Smarter, not Harder
Errata: regarding the comment on the rules - the AI has no built-in domain knowledge but the basic rules of the game. 📝 The paper "AlphaZero: Shedding new light on the grand games of chess, shogi and Go" is available here: https://deepmind.com/blog/alphazero-shedding-new-light-grand-games-chess-shogi-and-go/ Kasparov’s editorial: http://science.sciencemag.org/content/362/6419/1087 ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Jason Rollins, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #DeepMind #AlphaZero
Dear Fellow Scholars, this is 2 Minute Papers with Károly Zsolnai-Fehér. Finally, I have been waiting for quite a while to cover this amazing paper, which is about AlphaZero. We have talked about AlphaZero before; this is an AI that is able to play chess, Go, and shogi, or in other words, Japanese chess, on a remarkably high level. I will immediately start out by uttering the main point of this work. The point of AlphaZero is not to solve chess or any of these games. Its main point is to show that a general AI can be created that can perform on a superhuman level on not one, but several different tasks at the same time. Let's have a look at this image, where you see a small part of the evaluation of AlphaZero versus Stockfish, an amazing open-source chess engine, which has been consistently at or near the top among computer chess players for many years now. Stockfish has an Elo rating of over 3200, which means that it has a win rate of over 90% against the best human players in the world. Now interestingly, comparing these algorithms is nowhere near as easy as it sounds. This sounds curious, so why is that? For instance, it is not enough to pit the two algorithms against each other and see who ends up winning. It matters what version of Stockfish is used, how many positions the machines are allowed to evaluate, how much thinking time they are allowed, the size of the hash tables, the hardware being used, the number of threads being used, and so on. From the side of the chess community, these are the details that matter. However, from the side of the AI researcher, what matters most is to create a general algorithm that can play several different games on a superhuman level. Given these constraints, it would really be a miracle if AlphaZero were able to even put up a good fight against Stockfish. So, what happened? AlphaZero played a lot of games that ended up as draws against Stockfish, and not only that, but whenever there was a winner, it was almost always AlphaZero. Insanity. And what is quite remarkable is that AlphaZero trained for only 4 to 7 hours, purely through self-play. Comparatively, the development of the current version of Stockfish took more than 10 years. You can see how reliably this AI can be trained: the blue lines show the results of several training runs, and they all converge to the same result with only a tiny bit of deviation. AlphaZero is also not a brute-force algorithm, as it evaluates fewer positions per second than Stockfish. Kasparov put it really well in his article where he said that AlphaZero works smarter, not harder, than previous techniques. Even Magnus Carlsen, chess master extraordinaire, said in an interview that during his games he often thinks about what AlphaZero would do in this case, which I found to be quite remarkable. Kasparov also had many good things to say about the new AlphaZero in a, let's say, very Kasparov-esque manner. He also noted that the key point is not whether the current version of Stockfish or the one from two months ago was used. The key point is that Stockfish is a brilliant chess engine, but it is not able to play Go or any game other than chess. This is the main contribution that DeepMind was looking for with this work. This AI can master three games at once, and a few more papers down the line, it may be able to master any perfect information game. Oh my goodness, what a time to be alive. We have only scratched the surface in this video. This was only a taste of the paper. 
The evaluation section in the paper is out of this world, so make sure to have a look in the video description, and I am convinced that nearly any question one can possibly think of is addressed there. I also link to Kasparov's editorial on this topic. It is short and very readable. Give it a go. I hope this little taste of AlphaZero inspires you to go out there and explore for yourself. This is the main message of this series. Let me know in the comments what you think or if you found some other cool things related to AlphaZero. Thanks for watching and for your generous support and I'll see you next time.
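The claim that an Elo rating of over 3200 implies a win rate of over 90% against the best humans can be checked with the standard Elo expected-score formula. A minimal sketch follows; the opponent rating of about 2800 is an assumption standing in for a top human player, and the expected score counts draws as half a point.

def elo_expected_score(rating_a, rating_b):
    # Standard Elo formula: expected score of player A against player B.
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))

# Assumed ratings: Stockfish around 3200, a top human around 2800 (assumption).
print(round(elo_expected_score(3200, 2800), 3))  # ~0.909, i.e. over 90%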
[{"start": 0.0, "end": 4.08, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Karo Ejona Ifaher."}, {"start": 4.08, "end": 10.24, "text": " Finally, I have been waiting for quite a while to cover this amazing paper, which is about"}, {"start": 10.24, "end": 11.52, "text": " Alpha Zero."}, {"start": 11.52, "end": 17.56, "text": " We have talked about Alpha Zero before, this is an AI that is able to play chess, go,"}, {"start": 17.56, "end": 22.56, "text": " and shogi, or in other words, Japanese chess on a remarkably high level."}, {"start": 22.56, "end": 27.04, "text": " I will immediately start out by uttering the main point of this work."}, {"start": 27.04, "end": 31.72, "text": " The point of Alpha Zero is not to solve chess or any of these games."}, {"start": 31.72, "end": 37.96, "text": " Its main point is to show that a general AI can be created that can perform on a superhuman"}, {"start": 37.96, "end": 43.04, "text": " level on not one, but several different tasks at the same time."}, {"start": 43.04, "end": 47.76, "text": " Let's have a look at this image, where you see a small part of the evaluation of Alpha"}, {"start": 47.76, "end": 53.96, "text": " Zero versus Stockfish, an amazing open-source chess engine, which has been consistently"}, {"start": 53.96, "end": 58.84, "text": " at or around the top computer chess players for many years now."}, {"start": 58.84, "end": 65.28, "text": " Stockfish has an ill-orating of over 3200, which means that it has a win rate of over 90%"}, {"start": 65.28, "end": 68.08, "text": " against the best human players in the world."}, {"start": 68.08, "end": 73.72, "text": " Now interestingly, comparing these algorithms is nowhere near as easy as it sounds."}, {"start": 73.72, "end": 76.4, "text": " This sounds curious, so why is that?"}, {"start": 76.4, "end": 81.24000000000001, "text": " For instance, it is not enough to pit the two algorithms against each other and see who"}, {"start": 81.24000000000001, "end": 82.48, "text": " ends up winning."}, {"start": 82.48, "end": 87.60000000000001, "text": " It matters what version of Stockfish is used, how many positions are the machines allowed"}, {"start": 87.60000000000001, "end": 93.84, "text": " to evaluate, how much thinking time they are allowed, the size of hash tables, the hardware"}, {"start": 93.84, "end": 97.96000000000001, "text": " being used, the number of threads being used, and so on."}, {"start": 97.96000000000001, "end": 101.80000000000001, "text": " From the side of the chess community, these are the details that matter."}, {"start": 101.80000000000001, "end": 108.24000000000001, "text": " However, from the side of the AI researcher, what matters most is to create a general algorithm"}, {"start": 108.24000000000001, "end": 112.28, "text": " that can play several different games on a superhuman level."}, {"start": 112.28, "end": 117.4, "text": " The disconstrained it would really be a miracle if Alpha Zero were able to even put up a good"}, {"start": 117.4, "end": 119.52, "text": " fight against Stockfish."}, {"start": 119.52, "end": 121.8, "text": " So, what happened?"}, {"start": 121.8, "end": 126.96000000000001, "text": " Alpha Zero played a lot of games that ended up as draws against Stockfish, and not only"}, {"start": 126.96000000000001, "end": 132.28, "text": " that, but whenever there was a winner, it was almost always Alpha Zero."}, {"start": 132.28, "end": 138.08, "text": " Insanity, and what is quite remarkable is that Alpha Zero has only trained for 4 to 7"}, 
{"start": 138.08, "end": 140.72, "text": " hours only through self-play."}, {"start": 140.72, "end": 146.28, "text": " Comparatively, the development of the current version of Stockfish took more than 10 years."}, {"start": 146.28, "end": 151.48, "text": " You can see how reliably this AI can be trained, the blue lines show the results of several"}, {"start": 151.48, "end": 159.32, "text": " training runs, and they all converge to the same result with only a tiny bit of deviation."}, {"start": 159.32, "end": 164.96, "text": " Alpha Zero is also not a brute force algorithm, as it evaluates fewer positions per second"}, {"start": 164.96, "end": 166.2, "text": " than Stockfish."}, {"start": 166.2, "end": 171.83999999999997, "text": " Casparov put it really well in his article where he said that Alpha Zero works smarter and"}, {"start": 171.83999999999997, "end": 174.88, "text": " not harder than previous techniques."}, {"start": 174.88, "end": 180.51999999999998, "text": " Even Magnus Carson, chess master extraordinaire, said in an interview that during his games"}, {"start": 180.51999999999998, "end": 186.76, "text": " he often thinks what would Alpha Zero do in this case, which I found to be quite remarkable."}, {"start": 186.76, "end": 191.79999999999998, "text": " Casparov also had many good things to say about the new Alpha Zero in a, let's say, very"}, {"start": 191.79999999999998, "end": 194.12, "text": " Casparov-esque manner."}, {"start": 194.12, "end": 198.84, "text": " You also note that the key point is not whether the current version of Stockfish or the one"}, {"start": 198.84, "end": 200.84, "text": " from two months ago was used."}, {"start": 200.84, "end": 205.72, "text": " The key point is that Stockfish is a brilliant chess engine, but it is not able to play"}, {"start": 205.72, "end": 208.84, "text": " go or any game other than chess."}, {"start": 208.84, "end": 212.8, "text": " This is the main contribution that DeepMind was looking for with this work."}, {"start": 212.8, "end": 218.04000000000002, "text": " This AI can master three games at once and a few more papers down the line, it may be"}, {"start": 218.04000000000002, "end": 221.72, "text": " able to master any perfect information game."}, {"start": 221.72, "end": 224.92, "text": " All my goodness, what a time to be alive."}, {"start": 224.92, "end": 227.64, "text": " We have only scratched the surface in this video."}, {"start": 227.64, "end": 230.28, "text": " This was only a taste of the paper."}, {"start": 230.28, "end": 234.76, "text": " The evaluation section in the paper is out of this world, so make sure to have a look"}, {"start": 234.76, "end": 240.44, "text": " in the video description and I am convinced that nearly any questions one can possibly think"}, {"start": 240.44, "end": 242.0, "text": " of is addressed there."}, {"start": 242.0, "end": 245.36, "text": " I also link to Casparov's editorial on this topic."}, {"start": 245.36, "end": 247.52, "text": " It is short and very readable."}, {"start": 247.52, "end": 248.52, "text": " Give it a go."}, {"start": 248.52, "end": 254.56, "text": " I hope this little taste of Alpha Zero inspires you to go out there and explore yourself."}, {"start": 254.56, "end": 256.84000000000003, "text": " This is the main message of this series."}, {"start": 256.84000000000003, "end": 261.12, "text": " Let me know in the comments what you think or if you found some cool other things related"}, {"start": 261.12, "end": 262.12, "text": " to Alpha Zero."}, {"start": 262.12, 
"end": 288.88, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=F84jaIR5Uxc
AI-Based 3D Pose Estimation: Almost Real Time!
📝 The paper "3D Human Pose Machines with Self-supervised Learning" and its source code is available here: https://arxiv.org/abs/1901.03798 http://www.sysu-hcp.net/3d_pose_ssl/ https://github.com/chanyn/3Dpose_ssl.git ❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Claudio Fernandes, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Jason Rollins, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This episode is about a really nice new paper on pose estimation. Pose estimation means that we have an image or video of a human as an input, and the output should be this skeleton that you see here that shows what the current position of this person is. Sounds alright, but what are the applications of this really? Well, it has a huge swath of applications. For instance, many of you often hear about motion capture for video games and animated movies, but it is also used in medical applications for finding abnormalities in a patient's posture, animal tracking, understanding sign language, pedestrian detection for self-driving cars, and much, much more. So, if we can do something like this in real time, that's hugely beneficial for many, many applications. However, this is a very challenging task because humans have a large variety of appearances, images come in all kinds of possible viewpoints, and as a result, the algorithm has to deal with occlusions as well. This is particularly hard. Have a look here. In these two cases, we don't see the left elbow, so it has to be inferred from seeing the remainder of the body. We have the reference solution on the right, and as you see here, this new method is significantly closer to it than any of the previous works. Quite remarkable. The main idea in this paper is that it works out the poses both in 2D and 3D, and contains a neural network that can convert in both directions between these representations, while retaining the consistencies between them. First, the technique comes up with an initial guess, and follows up by using these pose transformer networks to further refine this initial guess. This makes all the difference, and not only does it lead to high-quality results, but it also takes way less time than previous algorithms. We can expect to obtain a predicted pose in about 51 milliseconds, which is almost 20 frames per second. This is close to real time, and is more than enough for many of the applications we've talked about earlier. In the age of rapidly improving hardware, these are already fantastic results, both in terms of quality and performance, and not only the hardware, but the papers are also improving at a remarkable pace. What a time to be alive. The paper contains an exhaustive evaluation section. It is measured against a variety of high-quality solutions. I recommend that you have a look in the video description. I hope nobody is going to install a system in my lab that starts beeping every time I slouch a little, but I am really looking forward to benefiting from these other applications. Thanks for watching and for your generous support, and I'll see you next time.
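A quick sanity check on the timing claim above, plus a sketch of the kind of interface such a pose estimator exposes. The function name and the 17-joint skeleton are illustrative assumptions, not details taken from the paper.

latency_ms = 51.0                 # per-pose inference time quoted in the video
fps = 1000.0 / latency_ms
print(round(fps, 1))              # ~19.6, i.e. "almost 20 frames per second"

# Hypothetical interface of a 3D pose estimator (names and shapes are assumptions):
# estimate_pose(image) -> array of shape (17, 3), one (x, y, z) per skeleton joint.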
[{"start": 0.0, "end": 4.28, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karojona Ifahir."}, {"start": 4.28, "end": 8.76, "text": " This episode is about a really nice new paper on pose estimation."}, {"start": 8.76, "end": 13.64, "text": " Pose estimation means that we have an image or video of a human as an input"}, {"start": 13.64, "end": 20.8, "text": " and the output should be this skeleton that you see here that shows what the current position of this person is."}, {"start": 20.8, "end": 24.6, "text": " Sounds alright, but what are the applications of this really?"}, {"start": 24.6, "end": 33.88, "text": " Well, it has a huge swath of applications. For instance, many of you often hear about motion capture for video games and animation movies,"}, {"start": 33.88, "end": 41.92, "text": " but it is also used in medical applications for finding abnormalities in a patient's posture, animal tracking,"}, {"start": 41.92, "end": 48.16, "text": " understanding sign language, pedestrian detection for self-driving cars, and much, much more."}, {"start": 48.16, "end": 54.8, "text": " So, if we can do something like this in real time, that's hugely beneficial for many, many applications."}, {"start": 54.8, "end": 60.8, "text": " However, this is a very challenging task because humans have a large variety of appearances,"}, {"start": 60.8, "end": 68.47999999999999, "text": " images come in all kinds of possible viewpoints, and as a result, the algorithm has to deal with occlusions as well."}, {"start": 68.47999999999999, "end": 71.52, "text": " This is particularly hard. Have a look here."}, {"start": 71.52, "end": 78.56, "text": " In these two cases, we don't see the left elbow, so it has to be inferred from seeing the remainder of the body."}, {"start": 78.56, "end": 84.72, "text": " We have the reference solution on the right, and as you see here, this new method is significantly closer to it"}, {"start": 84.72, "end": 88.64, "text": " than any of the previous works. Quite remarkable."}, {"start": 88.64, "end": 94.72, "text": " The main idea in this paper is that it works out the poses both in 2D and 3D,"}, {"start": 94.72, "end": 104.4, "text": " and contains a neural network that can convert to both directions between these representations, while retaining the consistencies between them."}, {"start": 104.4, "end": 111.12, "text": " First, the technique comes up with an initial guess, and follows up by using these pose transformer networks"}, {"start": 111.12, "end": 114.0, "text": " to further refine this initial guess."}, {"start": 114.0, "end": 118.8, "text": " This makes all the difference, and not only does it lead to high-quality results,"}, {"start": 118.8, "end": 127.28, "text": " but it also takes way less time than previous algorithms. We can expect to obtain a predicted pose in about 51 milliseconds,"}, {"start": 127.28, "end": 129.84, "text": " which is almost 20 frames per second."}, {"start": 129.84, "end": 136.0, "text": " This is close to real time, and is more than enough for many of the applications we've talked about earlier."}, {"start": 136.0, "end": 143.36, "text": " In the age of rapidly improving hardware, these are already fantastic results, both in terms of quality and performance,"}, {"start": 143.36, "end": 149.12, "text": " and not only the hardware, but the papers are also improving at a remarkable pace."}, {"start": 149.12, "end": 154.08, "text": " What a time to be alive. 
The paper contains an exhaustive evaluation section."}, {"start": 154.08, "end": 160.88000000000002, "text": " It is measured against a variety of high-quality solutions. I recommend that you have a look in the video description."}, {"start": 160.88000000000002, "end": 167.04000000000002, "text": " I hope nobody is going to install a system in my lab that starts beeping every time I slouch a little,"}, {"start": 167.04000000000002, "end": 171.04000000000002, "text": " but I am really looking forward to benefiting from these other applications."}, {"start": 171.04, "end": 175.67999999999998, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=lws-2u3LbYg
This AI Learned Image Decolorization..and More
❤️ Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers 📝 The paper "Deep Feature Consistent Deep Image Transformations: Downscaling, Decolorization and HDR Tone Mapping" is available here: https://arxiv.org/abs/1707.09482 https://houxianxu.github.io/assets/project/dfc-dit 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Jason Rollins, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/photo-583092/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu 👨‍💻 Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Whenever we build a website, a video game, or do any sort of photography and image manipulation, we often encounter the problems of image downscaling, decolorization, and HDR tone mapping. This work offers us one technique that can do all three of these really well. But first, before we proceed, why are we talking about downscaling? We are in the age of AI, where a computer program can beat the best players in chess and Go, so why talk about such a trivial challenge? Well, have a look here. Imagine that we have this high-fidelity input image, and due to file size constraints, we have to produce a smaller version of it. If we do it naively, this is what it looks like. Not great, right? To do a better job at this, our goal is to reduce the size of the image while still retaining its intricate details. Here are two classical downscaling techniques. Better, but the texture of the skin is almost completely lost. Have a look at this. This is what this learning-based technique came up with. Really good, right? It can also perform decolorization. Again, a problem that sounds trivial for the unassuming scholar, but when taking a closer look, we notice that there are many different ways of doing this, and somehow we seek a decolorized image that still relates to the original as faithfully as possible. Here you see that the previous methods are not bad at all, but this new technique is great at retaining the contrast between the flower and its green leaves. At this point, it is clear that deciding which output is the best is highly subjective. We'll get back to that in a moment. It is also capable of doing HDR tone mapping. This is something that we do when we capture an image with a device that supports a wide dynamic range, in other words, a wide range of brightness values, and we wish to display it on our monitor, which has a more limited dynamic range. And again, clearly, there are many ways to do that. Welcome to the wondrous world of tone mapping. Note that there are hundreds upon hundreds of algorithms to perform these operations in computer graphics research. And also note that these are very complex algorithms that took decades for smart researchers to come up with. So the seasoned Fellow Scholar will immediately ask: why talk about this work at all? What's so interesting about it? The goal here is to create a more general, learning-based method that can do a great job at not one, but all three of these problems. But how great exactly? And how do we decide how good these images are? To answer both of these questions at the same time, if you've been watching this series for a while, you probably know what comes next. The authors created a user study, which shows that for all three of these tasks, according to the users, the new method smokes the competition. It is not only more general, but also better than most of the published techniques. For instance, Reinhard's amazing tone mapper has been an industry standard for decades now. And look, almost 75% of the people prefer this new method over that. What required super smart researchers before can now be done with a learning algorithm. Unreal. What a time to be alive. 
A key idea for this algorithm is that the convolutional neural network that you see on the left is able to perform all three of these operations at the same time, and it is instructed by another neural network to do so in a way that preserves the visual integrity of the input images. Make sure to have a look at the paper for more details on how this perceptual loss function is defined. And if you wish to help us tell these amazing stories to even more people, please consider supporting us on Patreon. Your unwavering support on Patreon is the reason why this show can exist, and you can also pick up cool perks there, like watching these videos in early access, deciding the order of the next few episodes, or even getting your name showcased in the video description as a key supporter. You can find us at patreon.com/TwoMinutePapers or, as always, just click the link in the video description. Thanks for watching and for your generous support and I'll see you next time.
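For reference, here is a minimal sketch of two of the classical baselines this episode mentions: luma-based decolorization and Reinhard's global tone mapping operator. These are the simple textbook versions, not the learned, perceptual-loss-driven technique from the paper.

import numpy as np

def decolorize_luma(img):
    # img: H x W x 3 array of linear RGB values.
    # Classical baseline: weighted sum of channels (Rec. 709 luma weights).
    return img @ np.array([0.2126, 0.7152, 0.0722])

def reinhard_global(luminance):
    # Reinhard's global operator: compresses unbounded HDR luminance
    # into the displayable [0, 1) range.
    return luminance / (1.0 + luminance)

hdr = np.random.rand(4, 4, 3) * 10.0          # toy HDR image
ldr_luminance = reinhard_global(decolorize_luma(hdr))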
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Joel Naifahir."}, {"start": 4.32, "end": 10.68, "text": " Whenever we build a website, a video game, or do any sort of photography and image manipulation,"}, {"start": 10.68, "end": 18.2, "text": " we often encounter the problems of image downscaling, decalorization, and HDR tone mapping."}, {"start": 18.2, "end": 23.32, "text": " This work offers us one technique that can do all three of these really well."}, {"start": 23.32, "end": 28.12, "text": " But first, before we proceed, why are we talking about downscaling?"}, {"start": 28.12, "end": 33.480000000000004, "text": " We are in the age of AI, where a computer program can beat the best players in chess and"}, {"start": 33.480000000000004, "end": 37.480000000000004, "text": " go, so why talk about such a trivial challenge?"}, {"start": 37.480000000000004, "end": 40.0, "text": " Well, have a look here."}, {"start": 40.0, "end": 45.0, "text": " Imagine that we have this high fidelity input image, and due to file size constraints, we"}, {"start": 45.0, "end": 47.56, "text": " have to produce a smaller version of it."}, {"start": 47.56, "end": 50.92, "text": " If we do it naively, this is what it looks like."}, {"start": 50.92, "end": 52.56, "text": " Not great, right?"}, {"start": 52.56, "end": 58.08, "text": " To do a better job at this, our goal would be that the size of the image would be reduced,"}, {"start": 58.08, "end": 62.68, "text": " but while still retaining the intricate details of this image."}, {"start": 62.68, "end": 65.64, "text": " Here are two classical downscaling techniques."}, {"start": 65.64, "end": 70.36, "text": " Better, but the texture of the skin is almost completely lost."}, {"start": 70.36, "end": 72.0, "text": " Have a look at this."}, {"start": 72.0, "end": 74.84, "text": " This is what this learning-based technique came up with."}, {"start": 74.84, "end": 76.75999999999999, "text": " Really good, right?"}, {"start": 76.75999999999999, "end": 79.32, "text": " It can also perform decalorization."}, {"start": 79.32, "end": 85.08, "text": " Again, a problem that sounds trivial for the unassuming scholar, but when taking a closer"}, {"start": 85.08, "end": 91.24, "text": " look, we notice that there are many different ways of doing this, and somehow we seek a decalorized"}, {"start": 91.24, "end": 96.67999999999999, "text": " image that still relates to the original as faithfully as possible."}, {"start": 96.67999999999999, "end": 101.52, "text": " Here you see the previous methods that are not bad at all, but this new technique is great"}, {"start": 101.52, "end": 105.8, "text": " at retaining the contrast between the flower and its green leaves."}, {"start": 105.8, "end": 110.72, "text": " At this point, it is clear that deciding which output is the best is highly subjective."}, {"start": 110.72, "end": 113.36, "text": " We'll get back to that in a moment."}, {"start": 113.36, "end": 117.24, "text": " It is also capable of doing HDR tone mapping."}, {"start": 117.24, "end": 122.03999999999999, "text": " This is something that we do when we capture an image with a device that supports a wide"}, {"start": 122.03999999999999, "end": 127.64, "text": " dynamic range, in other words, the wide range of colors, and we wish to display it on our"}, {"start": 127.64, "end": 131.4, "text": " monitor, which has a more limited dynamic range."}, {"start": 131.4, "end": 134.6, "text": " And again, clearly, there are many ways to do 
that."}, {"start": 134.6, "end": 137.56, "text": " Welcome to the wondrous world of tone mapping."}, {"start": 137.56, "end": 142.04, "text": " Note that there are hundreds upon hundreds of algorithms to perform these operations in"}, {"start": 142.04, "end": 143.95999999999998, "text": " computer graphics research."}, {"start": 143.95999999999998, "end": 149.23999999999998, "text": " And also note that these are very complex algorithms that took decades for smart researchers"}, {"start": 149.23999999999998, "end": 150.68, "text": " to come up with."}, {"start": 150.68, "end": 156.16, "text": " So the season fellow scholar shall immediately ask why talk about this work at all?"}, {"start": 156.16, "end": 158.39999999999998, "text": " What's so interesting about it?"}, {"start": 158.39999999999998, "end": 164.07999999999998, "text": " The goal here is to create a little more general learning-based method that can do a great job"}, {"start": 164.07999999999998, "end": 167.95999999999998, "text": " at not one, but all three of these problems."}, {"start": 167.95999999999998, "end": 169.48, "text": " But how great exactly?"}, {"start": 169.48, "end": 172.67999999999998, "text": " And how do we decide how good these images are?"}, {"start": 172.67999999999998, "end": 177.23999999999998, "text": " To answer both of these questions at the same time, if you've been watching this series"}, {"start": 177.23999999999998, "end": 180.04, "text": " for a while, then you are indeed right."}, {"start": 180.04, "end": 185.64, "text": " The authors created a user study, which shows that for all three of these tasks, according"}, {"start": 185.64, "end": 189.48, "text": " to the users, the new method smokes the competition."}, {"start": 189.48, "end": 194.23999999999998, "text": " It is not only more general, but also better than most of the published techniques."}, {"start": 194.23999999999998, "end": 198.95999999999998, "text": " For instance, Reinhardt's amazing tone mapper has been an industry standard for decades"}, {"start": 198.96, "end": 199.96, "text": " now."}, {"start": 199.96, "end": 205.64000000000001, "text": " And look, almost 75% of the people prefer this new method over that."}, {"start": 205.64000000000001, "end": 212.16, "text": " What required super smart researchers before can now be done with the learning algorithm."}, {"start": 212.16, "end": 213.32, "text": " Unreal."}, {"start": 213.32, "end": 214.68, "text": " What a time to be alive."}, {"start": 214.68, "end": 219.44, "text": " A key idea for this algorithm is that this convolution on your own network that you see"}, {"start": 219.44, "end": 225.4, "text": " on the left is able to produce all three of these operations at the same time and to perform"}, {"start": 225.4, "end": 231.64000000000001, "text": " that it is instructed by another neural network to do this in a way that preserves the visual"}, {"start": 231.64000000000001, "end": 234.16, "text": " integrity of the input images."}, {"start": 234.16, "end": 238.4, "text": " Make sure to have a look at the paper for more details on how this perceptual loss function"}, {"start": 238.4, "end": 239.4, "text": " is defined."}, {"start": 239.4, "end": 244.6, "text": " And if you wish to help us tell these amazing stories to even more people, please consider"}, {"start": 244.6, "end": 246.68, "text": " supporting us on Patreon."}, {"start": 246.68, "end": 252.4, "text": " Your unwavering support on Patreon is the reason why this show can exist and you can also"}, {"start": 252.4, 
"end": 258.12, "text": " pick up cool perks there like watching these videos in early access deciding the order"}, {"start": 258.12, "end": 263.88, "text": " of the next few episodes or even getting your name showcased in the video description"}, {"start": 263.88, "end": 265.36, "text": " as a key supporter."}, {"start": 265.36, "end": 272.04, "text": " You can find us at patreon.com slash 2 minute papers or as always just click the link in"}, {"start": 272.04, "end": 273.24, "text": " the video description."}, {"start": 273.24, "end": 282.6, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=OwRuzn3RAhA
Extracting Rotations The Right Way
The paper "A Robust Method to Extract the Rotational Part of Deformations" is available here: http://matthias-mueller-fischer.ch/publications/stablePolarDecomp.pdf Our work with Activision-Blizzard is available here: › Project page: https://users.cg.tuwien.ac.at/zsolnai/gfx/separable-subsurface-scattering-with-activision-blizzard/ › Video: https://www.youtube.com/watch?v=72_iAlYwl0c Pick up cool perks on our Patreon page: › https://www.patreon.com/TwoMinutePapers 🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Jason Rollins, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This paper is about creating high-quality physics simulations, and is, in my opinion, one of the gems very few people know about. In these physical simulations, we have objects that undergo a lot of tormenting. For instance, they have to endure all kinds of deformations, rotations, and, of course, being pushed around. A subset of these simulation techniques requires us to be able to look at these deformations and forget about anything they do other than the rotational part. Don't push it, don't squish it, just take the rotational part. Here, the full deformation transform is shown with red, and the extracted rotational part is shown by the green indicators. This problem is not particularly hard and has been studied for decades, so we have excellent solutions to it. For instance, techniques that we refer to as polar decomposition, singular value decomposition, and more. By the way, in our earlier project together with Activision Blizzard, we also used the singular value decomposition to compute the scattering of light within our skin and other translucent materials. I've put a link in the video description, make sure to have a look. Okay, so if a bunch of techniques already exist to perform this, why do we need to invent anything here? Why make a video about something that has been solved many decades ago? Well, here's why. We don't have anything yet that is, criterion one, robust, which means that it works perfectly all the time. Even a slight inaccuracy is going to make an object implode in our simulations, so we'd better get something that is robust. And since these physical simulations are typically implemented on the graphics card, criterion two, we need something that is well suited for that and is as simple as possible. It turns out none of the existing techniques tick both of these boxes. If you start reading the paper, you will see a derivation of this new solution, a mathematical proof that it is correct and works all the time. And then, as an application, it shows fun physical simulations that utilize this technique. You can see here that these simulations are stable, no objects are imploding, although this extremely drunk dragon is showing a formidable attempt at doing that. Ouch! All the contortions and movements are modeled really well over a long time frame, and the original shape of the dragon can be recovered without any significant numerical errors. Finally, it also compares the source code for a previous method and the new method. As you see, there is a vast difference in terms of complexity that favors the new method. It is short, does not involve a lot of branching decisions, and is therefore an excellent candidate to run on state-of-the-art graphics cards. What I really like in this paper is that it does not present something and claim that, well, this seems to work. It first starts out with a crystal clear problem statement that is impossible to misunderstand. Then, the first part of the paper is pure mathematics that proves the validity of the new technique, and then drops it into a physical simulation, showing that it is indeed what we were looking for. And finally, a super simple piece of source code is provided so anyone can use it almost immediately. This is one of the purest computer graphics papers I've seen in a while. Make sure to have a look in the video description. Thanks for watching and for your generous support, and I'll see you next time.
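For context, this is roughly what the classical SVD-based baseline mentioned above looks like: it extracts the closest pure rotation from a deformation matrix. It is a sketch of the prior approach the paper improves upon, not the paper's own iterative, branch-free method.

import numpy as np

def rotation_from_deformation(F):
    # F: 3x3 deformation matrix. Returns the rotational part of its polar
    # decomposition, computed via the singular value decomposition.
    U, _, Vt = np.linalg.svd(F)
    R = U @ Vt
    if np.linalg.det(R) < 0:       # guard against a reflection
        U[:, -1] *= -1.0
        R = U @ Vt
    return R

# 90-degree rotation about z combined with some scaling:
F = np.array([[0.0, -2.0, 0.0],
              [2.0,  0.0, 0.0],
              [0.0,  0.0, 3.0]])
print(rotation_from_deformation(F))  # recovers the pure rotation about z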
[{"start": 0.0, "end": 4.14, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.14, "end": 12.16, "text": " This paper is about creating high-quality physics simulations, and is, in my opinion, one of the gems very few people know about."}, {"start": 12.16, "end": 17.02, "text": " In these physical simulations, we have objects that undergo a lot of tormenting."}, {"start": 17.02, "end": 24.16, "text": " For instance, they have to endure all kinds of deformations, rotations, and, of course, being pushed around."}, {"start": 24.16, "end": 34.1, "text": " A subset of these simulation techniques requires us to be able to look at these deformations and forget about anything they do other than the rotational part."}, {"start": 34.1, "end": 38.26, "text": " Don't push it, don't squish it, just take the rotational part."}, {"start": 38.26, "end": 45.92, "text": " Here, the full deformation transform is shown with red, and the extracted rotational part is shown by the green indicators here."}, {"start": 45.92, "end": 52.260000000000005, "text": " This problem is not particularly hard and has been studied for decades, so we have excellent solutions to this."}, {"start": 52.26, "end": 58.8, "text": " For instance, techniques that we refer to as polar decomposition, singular value decomposition, and more."}, {"start": 58.8, "end": 71.24, "text": " By the way, in our earlier project together with the Activision Blizzard Company, we also used the singular value decomposition to compute the scattering of light within our skin and other translucent materials."}, {"start": 71.24, "end": 74.64, "text": " I've put a link in the video description, make sure to have a look."}, {"start": 74.64, "end": 81.32, "text": " Okay, so if a bunch of techniques already exist to perform this, why do we need to invent anything here?"}, {"start": 81.32, "end": 86.27999999999999, "text": " Why make a video about something that has been solved many decades ago?"}, {"start": 86.27999999999999, "end": 87.88, "text": " Well, here's why."}, {"start": 87.88, "end": 95.16, "text": " We don't have anything yet that is criterion 1, robust, which means that it works perfectly all the time."}, {"start": 95.16, "end": 102.39999999999999, "text": " Even a slight inaccuracy is going to make an object implode our simulations, so we better get something that is robust."}, {"start": 102.39999999999999, "end": 108.8, "text": " And since these physical simulations are typically implemented on the graphics card, criterion 2,"}, {"start": 108.8, "end": 114.28, "text": " we need something that is well suited for that and is as simple as possible."}, {"start": 114.28, "end": 118.8, "text": " It turns out none of the existing techniques check both of these two boxes."}, {"start": 118.8, "end": 127.28, "text": " If you start reading the paper, you will see a derivation of this new solution, a mathematical proof that it is true and works all the time."}, {"start": 127.28, "end": 132.96, "text": " And then, as an application, it shows fun physical simulations that utilize this technique."}, {"start": 132.96, "end": 142.56, "text": " You can see here that these simulations are stable, no objects are imploding, although this extremely drunk dragon is showing a formidable attempt at doing that."}, {"start": 142.56, "end": 143.52, "text": " Ouch!"}, {"start": 143.52, "end": 153.56, "text": " All the contortions and movements are modeled really well over a long time frame, and the original shape of the dragon 
can be recovered without any significant numericulars."}, {"start": 153.56, "end": 159.52, "text": " Finally, it also compares the source code for a previous method and the new method."}, {"start": 159.52, "end": 164.48000000000002, "text": " As you see, there is a vast difference in terms of complexity that favors the new method."}, {"start": 164.48000000000002, "end": 173.0, "text": " It is short, does not involve a lot of branching decisions and is therefore an excellent candidate to run on state of the graphics cards."}, {"start": 173.0, "end": 179.84, "text": " What I really like in this paper is that it does not present something and claims that, well, this seems to work."}, {"start": 179.84, "end": 185.48000000000002, "text": " It first starts out with a crystal clear problem statement that is impossible to misunderstand."}, {"start": 185.48, "end": 197.39999999999998, "text": " Then, the first part of the paper is pure mathematics, proves the validity of a new technique, and then drops it into a physical simulation, showing that it is indeed what we were looking for."}, {"start": 197.39999999999998, "end": 203.79999999999998, "text": " And finally, a super simple piece of source code is provided so anyone can use it almost immediately."}, {"start": 203.79999999999998, "end": 208.2, "text": " This is one of the purest computer graphics paper out there I've seen in a while."}, {"start": 208.2, "end": 210.6, "text": " Make sure to have a look in the video description."}, {"start": 210.6, "end": 216.04, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=6fo5NhnyR8I
OpenAI - Learning Dexterous In-Hand Manipulation
Check out "Superintelligence: Paths, Dangers, Strategies" on Audible: US: https://amzn.to/2RXr32F EU: https://amzn.to/2SqauwI The paper "Learning Dexterous In-Hand Manipulation" is available here: https://blog.openai.com/learning-dexterity/ https://arxiv.org/abs/1808.00177 Pick up cool perks on our Patreon page: › https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Jason Rollins, Javier Bustamante, John De Witt, Kaiesh Vohra, Kjartan Olason, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This work is about OpenAI's technique that teaches a robot hand to dexterously manipulate a block into a target state. And in this project they did one of my favorite things, which is first training an AI within a simulation and then deploying it into the real world. And in the best-case scenario, this knowledge from the simulation will actually generalize to the real world. However, while we are in the simulation, we can break free from the limitations of worldly things such as hardware, movement speed, or even time itself. So how is that possible? The limitation on the number of experiments we can run in a simulation is bounded not by our time, which is scarce, but by how powerful our hardware is, which is abundant, as it is accelerating at a nearly exponential pace. And this is the reason why OpenAI's and DeepMind's AIs were able to train on 200 years' worth of games before first playing a human pro player. This sounds great, but the simulation is always more crude than the real world, so how do we know for sure that we created something that will indeed be useful in the real world and not just in the simulation? Let's try an analogy. Think of the machine as a student, and the simulation as the textbook that it learns from. If the textbook contains only a few trivial problems to learn from, when the day of the exam comes, if the exam is any good, the student will fail. The exam is the equivalent of deploying the machine into the real world, and apparently the real world is a damn good exam. So how can we prepare a student to do well on this exam? Well, we have to provide them with a textbook that contains not only a lot of problems, but also a diverse set of challenges. This is what machine learning researchers call domain randomization. This means that we teach an AI program in different virtual worlds, and in each one of them we change parameters like how fast the hand is, what the color and weight of the cube are, and more. This is a proper textbook, which means that after this kind of training, this AI can deal with new and unexpected situations. The knowledge that it has obtained is so general that we can change even the geometry of the target object and the machine will still be able to manipulate it correctly. Outstanding. To implement this idea, scientists at OpenAI trained not one agent, but a selection of agents in these randomized environments. The first main component of this system is a pose estimator. This module looks at the cube from three angles, predicts the position and orientation of the block, and is implemented through a convolutional neural network. The advantage of this is that we can generate a near-infinite amount of training data ourselves. You can see here that when the AI looks at real images, it is only a few degrees worse than in the simulation when estimating angles, which is thanks to the excellent textbook. I would not be surprised if this accuracy exceeds the capabilities of an ordinary human, given that it can perform this many times within a second. Then, the next part is choosing what the next action should be. Of course, we seek to rotate this cube in a way that brings us closer to our objective. This is done by a reinforcement learning technique which uses similar modules to OpenAI's previous algorithm that learned to play Dota 2 really well. Another testament to how general these learning algorithms are. 
I also recommend checking out OpenAI's video on this work in the video description. Now, I always read in the comments here on YouTube that many of you are longing for more. Five minute papers, ten minute papers, two hour papers were among the requests I heard from you before. And of course, I am also longing for more, as I have quite a few questions that keep me up at night. Is it possible for us to ever come up with a superintelligent AI? If yes, how? What types of these AIs could exist? Should we be worried? If you are also looking for some answers, we are now trying out a sponsorship with Audible, and I have a great recommendation for you, which is none other than the book Superintelligence by Nick Bostrom. It addresses all of these questions really well, and if you sign up under the link below in the video description, you will get this book free of charge. Whenever you have to do some work around the house, or commute to school or work, just pop in a pair of headphones and listen for free. Some more AI for you while doing something tedious. That's as good as it gets. If you feel that the start of the book is a little slow for you, make sure to jump to the chapter named "Is the default outcome doom?". But buckle up, because there are going to be fireworks from that point in the book. We thank Audible for supporting this video and send a big thank you to all of you who sign up and support the series. Thanks for watching and for your generous support, and I'll see you next time.
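To make the domain randomization idea from this episode a little more concrete, here is a hypothetical sketch of how one might sample a randomized training environment. The parameter names and ranges are assumptions for illustration; they are not taken from OpenAI's paper.

import random

def sample_randomized_environment(base):
    # Each training episode gets a slightly different "virtual world", so the
    # policy cannot overfit to one exact simulator configuration.
    env = dict(base)
    env["hand_speed"]  = base["hand_speed"]  * random.uniform(0.8, 1.2)    # assumed range
    env["cube_weight"] = base["cube_weight"] * random.uniform(0.5, 1.5)    # assumed range
    env["cube_size"]   = base["cube_size"]   * random.uniform(0.95, 1.05)  # assumed range
    env["cube_color"]  = [random.random() for _ in range(3)]               # visual appearance
    return env

base_env = {"hand_speed": 1.0, "cube_weight": 0.09, "cube_size": 0.055}
episodes = [sample_randomized_environment(base_env) for _ in range(1000)]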
[{"start": 0.0, "end": 4.5600000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifehir."}, {"start": 4.5600000000000005, "end": 9.76, "text": " This work is about open AI's technique that teaches a robot arm to dexterously manipulate"}, {"start": 9.76, "end": 12.120000000000001, "text": " a block to a target state."}, {"start": 12.120000000000001, "end": 18.0, "text": " And in this project they did one of my favorite things, which is first training an AI within"}, {"start": 18.0, "end": 22.52, "text": " a simulation and then deploying it into the real world."}, {"start": 22.52, "end": 27.36, "text": " And in the best case scenario, this knowledge from the simulation will actually generalize"}, {"start": 27.36, "end": 28.560000000000002, "text": " to the real world."}, {"start": 28.56, "end": 33.44, "text": " However, while we are in the simulation, we can break free from the limitations of worldly"}, {"start": 33.44, "end": 39.28, "text": " things such as hardware, movement speed, or even time itself."}, {"start": 39.28, "end": 40.92, "text": " So how is that possible?"}, {"start": 40.92, "end": 45.64, "text": " The limitation on the number of experiments we can run in a simulation is bounded by not"}, {"start": 45.64, "end": 51.56, "text": " our time, which is scarce, but how powerful our hardware is, which is abundant as it"}, {"start": 51.56, "end": 55.08, "text": " is accelerating at a nearly exponential pace."}, {"start": 55.08, "end": 61.36, "text": " And this is the reason why open AI's and deep-mides AI was able to train for 200 years worth"}, {"start": 61.36, "end": 64.72, "text": " of games before first playing a human pro player."}, {"start": 64.72, "end": 70.0, "text": " This sounds great, but the simulation is always more crude than the real world, so how do we"}, {"start": 70.0, "end": 75.28, "text": " know for sure that we created something that will indeed be useful in the real world and"}, {"start": 75.28, "end": 77.4, "text": " not just in the simulation?"}, {"start": 77.4, "end": 79.03999999999999, "text": " Let's try an analogy."}, {"start": 79.03999999999999, "end": 84.64, "text": " Think of the machine as a student and the simulation would be its textbook that it learns from."}, {"start": 84.64, "end": 89.48, "text": " If the textbook contains only a few trivial problems to learn from, when the day of the"}, {"start": 89.48, "end": 94.28, "text": " exam comes, if the exam is any good, the student will fail."}, {"start": 94.28, "end": 99.16, "text": " The exam is the equivalent of deploying the machine into the real world, and apparently"}, {"start": 99.16, "end": 101.84, "text": " the real world is a damn good exam."}, {"start": 101.84, "end": 105.72, "text": " So how can we prepare a student to do well on this exam?"}, {"start": 105.72, "end": 111.32, "text": " Well, we have to provide them with a textbook that contains not only a lot of problems,"}, {"start": 111.32, "end": 114.6, "text": " but also a diverse set of challenges as well."}, {"start": 114.6, "end": 118.75999999999999, "text": " This is what machine learning researchers call domain randomization."}, {"start": 118.75999999999999, "end": 123.75999999999999, "text": " This means that we teach an AI program in different virtual worlds and in each one of"}, {"start": 123.75999999999999, "end": 130.07999999999998, "text": " them we change parameters like how fast the hand is, what color and weight the cube is,"}, {"start": 130.07999999999998, "end": 131.07999999999998, 
"text": " and more."}, {"start": 131.07999999999998, "end": 136.51999999999998, "text": " This is a proper textbook, which means that after this kind of training, this AI can deal"}, {"start": 136.51999999999998, "end": 139.44, "text": " with new and unexpected situations."}, {"start": 139.44, "end": 144.48, "text": " The knowledge that it has obtained is so general that we can change even the geometry of the"}, {"start": 144.48, "end": 149.6, "text": " target object and the machine will still be able to manipulate it correctly."}, {"start": 149.6, "end": 151.0, "text": " Outstanding."}, {"start": 151.0, "end": 156.72, "text": " To implement this idea, scientists at OpenAI trained not one agent, but a selection of"}, {"start": 156.72, "end": 159.39999999999998, "text": " agents in these randomized environments."}, {"start": 159.39999999999998, "end": 163.44, "text": " The first main component of this system is a pose estimator."}, {"start": 163.44, "end": 168.48, "text": " This module looks at the cube from three angles and predicts the position and orientation"}, {"start": 168.48, "end": 173.0, "text": " of the block and is implemented through a convolutional neural network."}, {"start": 173.0, "end": 178.56, "text": " The advantage of this is that we can generate a near infinite amount of training data ourselves."}, {"start": 178.56, "end": 184.08, "text": " You can see here that when the AI looks at real images, it is only a few degrees worse"}, {"start": 184.08, "end": 189.76, "text": " than in the simulation when estimating angles, which is the case of the excellent textbook."}, {"start": 189.76, "end": 195.16, "text": " I would not be surprised if this accuracy exceeds the capabilities of an ordinary human"}, {"start": 195.16, "end": 198.96, "text": " given that it can perform this many times within a second."}, {"start": 198.96, "end": 202.96, "text": " Then, the next part is choosing what the next action should be."}, {"start": 202.96, "end": 208.28, "text": " Of course, we seek to rotate this cube in a way that brings us closer to our objective."}, {"start": 208.28, "end": 213.4, "text": " This is done by a reinforcement learning technique which uses similar modules as OpenAI's"}, {"start": 213.4, "end": 218.0, "text": " previous algorithm that learn to play Dota 2 really well."}, {"start": 218.0, "end": 221.64000000000001, "text": " Another testament to how general these learning algorithms are."}, {"start": 221.64000000000001, "end": 226.04000000000002, "text": " I also recommend checking out OpenAI's video on this work in the video description."}, {"start": 226.04, "end": 231.92, "text": " Now, I always read in the comments here on YouTube that many of you are longing for more."}, {"start": 231.92, "end": 237.48, "text": " Five minute papers, ten minute papers, two hour papers were among the requests I heard"}, {"start": 237.48, "end": 238.79999999999998, "text": " from you before."}, {"start": 238.79999999999998, "end": 243.16, "text": " And of course, I am also longing for more as I have quite a few questions that keep"}, {"start": 243.16, "end": 244.56, "text": " me up at night."}, {"start": 244.56, "end": 249.07999999999998, "text": " Is it possible for us to ever come up with a super intelligent AI?"}, {"start": 249.07999999999998, "end": 250.95999999999998, "text": " If yes, how?"}, {"start": 250.95999999999998, "end": 253.6, "text": " What types of these AI could exist?"}, {"start": 253.6, "end": 254.95999999999998, "text": " Should we be worried?"}, {"start": 254.96, 
"end": 260.08, "text": " If you are also looking for some answers, we are now trying out a sponsorship with Audible,"}, {"start": 260.08, "end": 265.52, "text": " and I have a great recommendation for you, which is none other than the book Super Intelligence"}, {"start": 265.52, "end": 267.0, "text": " by Nick Bostrom."}, {"start": 267.0, "end": 271.8, "text": " It addresses all of these questions really well, and if you sign up under the link below"}, {"start": 271.8, "end": 275.96000000000004, "text": " in the video description, you will get this book free of charge."}, {"start": 275.96000000000004, "end": 280.72, "text": " Whenever you have to do some work around the house, commute to school or work, just pop"}, {"start": 280.72, "end": 283.92, "text": " in a pair of headphones and listen for free."}, {"start": 283.92, "end": 286.68, "text": " Some more AI for you while doing something tedious."}, {"start": 286.68, "end": 289.28000000000003, "text": " That's as good as it gets."}, {"start": 289.28000000000003, "end": 293.16, "text": " If you feel that the start of the book is a little slow for you, make sure to jump to"}, {"start": 293.16, "end": 297.08000000000004, "text": " the chapter by the name is the default outcome, Doom."}, {"start": 297.08000000000004, "end": 301.40000000000003, "text": " But buckle up because there is going to be fireworks from that point in the book."}, {"start": 301.40000000000003, "end": 306.20000000000005, "text": " We thank Audible for supporting this video and send a big thank you for all of you who"}, {"start": 306.20000000000005, "end": 308.24, "text": " sign up and support the series."}, {"start": 308.24, "end": 314.24, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=DMXvkbAtHNY
DeepMind’s AlphaStar Beats Humans 10-0 (or 1)
DeepMind's #AlphaStar blog post: https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/ Full event: https://www.youtube.com/watch?v=cUTMhmVh1qs Highlights: https://www.youtube.com/watch?v=6EQAsrfUIyo Agent visualization: https://www.youtube.com/watch?v=HcZ48JDamyk&feature=youtu.be #DeepMind's Reddit AMA: https://old.reddit.com/r/MachineLearning/comments/ajgzoc/we_are_oriol_vinyals_and_david_silver_from/ APM comments within the AMA: https://old.reddit.com/r/MachineLearning/comments/ajgzoc/we_are_oriol_vinyals_and_david_silver_from/eexs0pd/ Mana’s personal experience: https://www.youtube.com/watch?v=zgIFoepzhIo Artosis’s analysis: https://www.youtube.com/watch?v=_YWmU-E2WFc Brownbear’s analysis: https://www.youtube.com/watch?v=sxQ-VRq3y9E WinterStarcraft’s analysis: https://www.youtube.com/watch?v=H3MCb4W7-kM Watch these videos in early access: › https://www.patreon.com/TwoMinutePapers Errara: - The in-game has been fixed to run in real time. We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Jason Rollins, Javier Bustamante, John De Witt, Kaiesh Vohra, Kjartan Olason, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is 2 Minute Papers with Károly Zsolnai-Fehér. I think this is one of the more important things that happened in AI research lately. In the last few years, we have seen DeepMind defeat the best Go players in the world, and after OpenAI's venture in the game of Dota 2, it's time for DeepMind to shine again as they take on StarCraft 2, a real-time strategy game. The depth and the amount of skill required to play this game is simply astounding. The search space of StarCraft 2 is so vast that it exceeds both chess and even Go by a significant margin. Also, it is a game that requires a great deal of mechanical skill and split-second decision-making, and we have imperfect information, as we only see what our units can see. A nightmare situation for any AI. DeepMind invited a beloved pro player, TLO, to play a few games against their new StarCraft 2 AI that goes by the name AlphaStar. Note that TLO is a professional player who is easily in the top 1% of players or even better, mid-grandmaster for those who play StarCraft 2. This video is about what happened during this event, and later I will make another video that describes the algorithm that was used to create this AI. The paper is still under review, so it will take a little time until I can get my hands on it. At the end of this video, you will also see the inner workings of this AI. Let's dive in. This is an AI that looked at a few games played by human players, and after that initial step, it learned by playing against itself for about 200 years. In our next episode, you will see how this is even possible, so I hope you are subscribed to the series. You see here that the AI controls the blue units, and TLO, the human player, plays red. Right at the start of the first game, the AI did something interesting. In fact, what is interesting is what it didn't do. It started to create new buildings next to its nexus, instead of building a wall-off that you can see here. Using a wall-off is considered standard practice in most games, and the AI used these buildings not to wall off the entrance, but to shield the workers from possible attacks. Now note that this is not unheard of, but this is also not a strategy that is widely played today and is considered non-standard. It also built more worker units than what is universally accepted as standard; we found out later that this was partly done in anticipation of losing a few of them early on. Very cool. And almost before we even knew what happened, it won the first game a little more than 7 minutes in, which is very quick, noting that in-game time is a little faster than real time. The thought process of TLO at this point is: that's interesting, but okay, the AI plays aggressively and managed to pull this one off. No big deal. We will fire up the second game, and in the meantime, a few interesting details. The goal of setting up the details of this algorithm was that the number of actions performed by the AI roughly matches a human player, and hopefully it still plays as well or better. It has to make meaningful strategic decisions. You see here that this checks out for the average actions per minute, but if you look here, you see around the tail end that there are times when it performs more actions than humans, and this may enable play styles that are not accessible to human players. However, note that many times it also does miraculous things with very few actions. Now, what about another important detail? Reaction time.
The reaction time of the AI is set to 350 milliseconds, which is quite slow. That's excellent news, because this is usually a common angle of criticism for game AIs. The AI also sees the whole map at once, but it is not given more information than what its units can see. This is perhaps the most commonly misunderstood detail, so it is worth noting. So in other words, it sees exactly what a human would see if the human moved the camera around very quickly, but it doesn't have to move the camera, which adds additional actions and cognitive load for the human, so one might say that the AI has an edge here. The AI plays these games independently. What's more, each game was played by a different AI, which also means that they do not memorize what happened in the last game, like a human would. Early in the next game, we can see the utility of the wall-off in action, which is able to completely prevent the AI's early attack. Later that game, the AI used disruptors, a unit which, if controlled with such a level of expertise, can decimate the army of the opponent with area damage by killing multiple units at once. It has done an outstanding job picking away at the army of TLO. Then, after getting a significant advantage, AlphaStar loses it with a few sloppy plays and by deciding to engage aggressively while standing in tight choke points. You can see that this is not such a great idea. This was quite surprising, as this is considered to be StarCraft 101 knowledge right there. During the remainder of the match, the commentators mentioned that they play and watch games all the time, and the AI came up with an army composition that they have never seen during a professional match. And the AI won this one too. After this game, it became clear that these agents can play any style in the game, which is terrifying. Here you can see an alternative visualization that shows a little more of the inner workings of the neural network. We can see what information it gets from the game, a visualization of neurons that get activated within the network, what locations and units are considered for the next actions, and whether the AI predicts itself as the winner or the loser of the game. If you look carefully, you will also see the moment when the agent becomes certain that it will win this game. I could look at this all day long, and if you feel the same way, make sure to visit the video description. I have a link to the source video for you. The final result against TLO was 5-0. "I actually lost all of the five matches." So that's something. And he mentioned that AlphaStar played very much like a human does and almost always managed to outmaneuver him. However, TLO also mentioned that he is confident that upon playing more training matches against these agents, he would be able to defeat the AI. I hope he will be given a chance to do that. This AI seems strong, but still beatable. I would also note that many of you would probably expect the later versions of AlphaStar to be way better than this one. The good news is that the story continues, and we'll see whether that's true. So at this point, the DeepMind scientists said maybe we could try to be a bit more ambitious and asked, can you bring us someone better? And in the meantime, they pressed that training button on the AI again. In comes MaNa, a top tier pro player, one of the best Protoss players in the world.
This was a nerve-wracking moment for the DeepMind scientists as well, because their agents had played against each other, so they only knew the AI's win rate against a different AI. But they didn't know how it would compete against a top pro player. It may still have holes in its strategy. Who knows what would happen. Understandably, they had very little confidence in winning this one. What they didn't expect is that the new AI was not slightly improved or somewhat improved. No, no, no, no. This new AI was next level. This set of improved agents, among many other skills, had incredibly crisp micromanagement of each individual unit. In the first game, we've seen it pulling back injured units, but still letting them attack from afar masterfully, leading to an early win for the AI against MaNa in the first game. He and the commentators were equally shocked by how well the agent played. And I will add that I remember watching many games from a now inactive player by the name of MarineKing a few years ago. And I vividly remember that he played some of his games so well, the commentators said that there is no better way to put it, he played like a god. I am almost afraid to say that this micromanagement was even more crisp than that. This AI plays phenomenal games. In later matches, the AI did things that seemed like blunders, like attacking up ramps and standing in choke points, or using unfavorable unit compositions and refusing to change them. And get this: it still won all of those games, 5-0 against a top pro player. Let that sink in. The competition was closed by a match where the AI was asked to also do the camera management. The agent was still very competent, but somewhat weaker, and as a result lost this game, hence the "(or 1)" part in the title. My impression is that it was asked to do something that it was not designed for, and I expect a future version to be able to handle this use case as well. I will also commend MaNa for his solid game plan for this game, and huge respect to DeepMind for their sportsmanship. Interestingly, in this match, MaNa also used the worker over-saturation strategy that I mentioned earlier. This he learned from AlphaStar and used it in his winning game. Isn't that amazing? DeepMind also held a Reddit AMA where anyone could ask them questions to make sure to clear up any confusion. For instance, the actions per minute part has been addressed. I've included a link to that for you in the video description. To go from a turn-based, perfect information game like Go to a real-time strategy game of imperfect information in about a year sounds like science fiction to me. And yet, here it is. Also, note that DeepMind's goal is not to create a godlike StarCraft II AI. They want to solve intelligence, not StarCraft II, and they use this game as a vehicle to demonstrate the AI's long-term decision-making capabilities against human players. One more important thing to emphasize is that the building blocks of AlphaStar are meant to be reasonably general AI algorithms, which means that parts of this AI can be reused for other things. For instance, Demis Hassabis mentioned weather prediction and climate modeling as examples. If you take only one thought from this video, let it be this one. I urge you to watch all the matches, because what you are witnessing may very well be history in the making.
I put a link to the whole event in the video description, plus plenty more materials, including other people's analyses, MaNa's personal experience of the event, his breakdown of his games, and what was going through his head during the event. I highly recommend checking out his fifth game, but really, go through them all. It's a ton of fun. I made sure to include a more skeptical analysis of the games as well to give you a balanced portfolio of insights. Also, huge respect to DeepMind and the players, who have practiced their chops for many, many years and played really well under immense pressure. Thank you all for this delightful event. It really made my day. And the ultimate question is, how long did it take to train these agents? Two weeks. Wow. And what's more, after the training step, the AI can be deployed on an inexpensive consumer desktop machine. And this is only the first version. This is just a taste, and it would be hard to overstate how big of a milestone this is. And now, scientists at DeepMind have sufficient data to calculate the amount of resources they need to spend to train the next, even more improved agents. I am confident that they will also take into consideration the feedback from the StarCraft community when creating this next version. What a time to be alive. What do you think about all this? Any predictions? Is this harder than Dota 2? Let me know in the comments section below. And remember, we humans build up new strategies by learning from each other, and of course, the AI, as you have seen here, doesn't care about any of that. It doesn't need intuition and can come up with unusual strategies. The difference now is that these strategies work against some of the best human players. Now it's time for us to finally start learning from an AI. GG. Thanks for watching and for your generous support, and I'll see you next time.
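The transcript above mentions two design constraints for AlphaStar: an actions-per-minute budget meant to roughly match human players, and a fixed reaction time of 350 milliseconds. The Python sketch below only illustrates how such limits could be enforced by a wrapper around an agent; the class name, the interface, and the default APM budget are assumptions made for this example, and none of it is DeepMind's actual code.

```python
from collections import deque

class HumanLikeLimiter:
    """Toy wrapper that delays observations by a fixed reaction time and caps
    how many actions can be issued per minute. Illustrative only; the numbers
    and the interface are assumptions, not DeepMind's actual setup."""

    def __init__(self, reaction_time_s=0.35, max_apm=300):
        self.reaction_time_s = reaction_time_s
        self.max_apm = max_apm
        self.pending = deque()       # (time the agent may see it, observation)
        self.action_times = deque()  # timestamps of recently issued actions

    def push_observation(self, t, observation):
        # The agent only "sees" an observation reaction_time_s after it happened.
        self.pending.append((t + self.reaction_time_s, observation))

    def visible_observations(self, t):
        visible = []
        while self.pending and self.pending[0][0] <= t:
            visible.append(self.pending.popleft()[1])
        return visible

    def try_act(self, t):
        # Forget actions older than one minute, then check the APM budget.
        while self.action_times and t - self.action_times[0] > 60.0:
            self.action_times.popleft()
        if len(self.action_times) >= self.max_apm:
            return False             # over budget: the action is not issued
        self.action_times.append(t)
        return True

limiter = HumanLikeLimiter()
limiter.push_observation(t=0.0, observation="enemy units spotted")
print(limiter.visible_observations(t=0.2))  # [] - still within the reaction delay
print(limiter.visible_observations(t=0.4))  # ['enemy units spotted']
print(limiter.try_act(t=0.4))               # True - well under the APM cap
```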
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Karo Isholaifahir."}, {"start": 4.4, "end": 9.92, "text": " I think this is one of the more important things that happened in AI research lately."}, {"start": 9.92, "end": 15.48, "text": " In the last few years we have seen Deep Mind defeat the best goal players in the world,"}, {"start": 15.48, "end": 21.44, "text": " and after Open AI's venture in the game of Dota 2, it's time for Deep Mind to shine"}, {"start": 21.44, "end": 26.240000000000002, "text": " again, as they take on Starcraft 2 a real-time strategy game."}, {"start": 26.24, "end": 31.68, "text": " The depth and the amount of skill required to play this game is simply astounding."}, {"start": 31.68, "end": 39.239999999999995, "text": " The search space of Starcraft 2 is so vast that it exceeds both chess and even go by a significant"}, {"start": 39.239999999999995, "end": 40.239999999999995, "text": " margin."}, {"start": 40.239999999999995, "end": 46.68, "text": " Also, it is a game that requires a great deal of mechanical skill, split second decision-making,"}, {"start": 46.68, "end": 52.480000000000004, "text": " and we have imperfect information as we only see what our units can see."}, {"start": 52.480000000000004, "end": 55.519999999999996, "text": " A nightmare situation for any AI."}, {"start": 55.52, "end": 61.400000000000006, "text": " Deep Mind invited a beloved pro player, TLO, to play a few games against their new Starcraft"}, {"start": 61.400000000000006, "end": 65.68, "text": " 2 AI that goes by the name Alpha Star."}, {"start": 65.68, "end": 71.68, "text": " Note that TLO is a professional player who is easily in the top 1% of players or even"}, {"start": 71.68, "end": 75.72, "text": " better, mid-grandmaster for those who play Starcraft 2."}, {"start": 75.72, "end": 80.96000000000001, "text": " This video is about what happened during this event, and later I will make another video"}, {"start": 80.96000000000001, "end": 85.04, "text": " that describes the algorithm that was used to create this AI."}, {"start": 85.04, "end": 89.32000000000001, "text": " The paper is still under review, so it will take a little time until I can get my hands"}, {"start": 89.32000000000001, "end": 90.32000000000001, "text": " on it."}, {"start": 90.32000000000001, "end": 95.12, "text": " At the end of this video, you will also see the inner workings of this AI."}, {"start": 95.12, "end": 96.32000000000001, "text": " Let's dive in."}, {"start": 96.32000000000001, "end": 102.76, "text": " This is an AI that looked at a few games played by human players, and after that initial step,"}, {"start": 102.76, "end": 107.48, "text": " it learned by playing itself for about 200 years."}, {"start": 107.48, "end": 112.08000000000001, "text": " In our next episode, you will see how this is even possible, so I hope you are subscribed"}, {"start": 112.08000000000001, "end": 113.36000000000001, "text": " to the series."}, {"start": 113.36, "end": 119.92, "text": " You see here that the AI controls the blue units, and TLO, the human player, plays red."}, {"start": 119.92, "end": 124.36, "text": " Right at the start of the first game, the AI did something interesting."}, {"start": 124.36, "end": 127.72, "text": " In fact, what is interesting is what it didn't do."}, {"start": 127.72, "end": 132.88, "text": " It started to create new buildings next to its nexus, instead of building a wall-off"}, {"start": 132.88, "end": 134.44, "text": " that you can see here."}, 
{"start": 134.44, "end": 139.72, "text": " Using a wall-off is considered standard practice in most games, and the AI used these buildings"}, {"start": 139.72, "end": 145.08, "text": " to not wall-off the entrance, but shield away the workers from possible attacks."}, {"start": 145.08, "end": 150.16, "text": " Now note that this is not unheard of, but this is also not a strategy that is widely played"}, {"start": 150.16, "end": 153.44, "text": " today and is considered non-standard."}, {"start": 153.44, "end": 158.36, "text": " It also built more worker units than what is universally accepted as standard, we found"}, {"start": 158.36, "end": 164.04, "text": " out later that this was partly done in anticipation of losing a few of them early on."}, {"start": 164.04, "end": 165.52, "text": " Very cool."}, {"start": 165.52, "end": 171.64000000000001, "text": " One, almost before we even knew what happened, it won the first game a little more than 7 minutes"}, {"start": 171.64000000000001, "end": 187.64000000000001, "text": " in, which is very quick, noting that in-game time is a little faster than real time."}, {"start": 187.64000000000001, "end": 193.56, "text": " The thought process of TLO at this point is that that's interesting, but okay, well,"}, {"start": 193.56, "end": 197.36, "text": " the AI plays aggressively and managed to pull this one off."}, {"start": 197.36, "end": 198.68, "text": " No big deal."}, {"start": 198.68, "end": 203.52, "text": " We will fire up the second game, and in the meantime, a few interesting details."}, {"start": 203.52, "end": 208.72, "text": " The goal of setting up the details of this algorithm was that the number of actions performed"}, {"start": 208.72, "end": 216.08, "text": " by the AI roughly matches a human player, and hopefully it still plays as well or better."}, {"start": 216.08, "end": 219.04, "text": " It has to make meaningful strategic decisions."}, {"start": 219.04, "end": 224.07999999999998, "text": " You see here that this checks out for the average actions every minute, but if you look here,"}, {"start": 224.07999999999998, "end": 228.76, "text": " you see around the tail end here that there are times when it performs more actions than"}, {"start": 228.76, "end": 234.35999999999999, "text": " humans, and this may enable play styles that are not accessible for human players."}, {"start": 234.35999999999999, "end": 240.64, "text": " However, note that many times it also does miraculous things with very few actions."}, {"start": 240.64, "end": 243.88, "text": " Now, what about another important detail?"}, {"start": 243.88, "end": 244.88, "text": " Reaction time."}, {"start": 244.88, "end": 250.44, "text": " The reaction time of the AI is set to 350 milliseconds, which is quite slow."}, {"start": 250.44, "end": 254.76, "text": " That's excellent news because this is usually a common angle of criticism for game"}, {"start": 254.76, "end": 255.76, "text": " AI's."}, {"start": 255.76, "end": 261.36, "text": " The AI also sees the whole map at once, but it is not given more information than what"}, {"start": 261.36, "end": 263.08, "text": " its units can see."}, {"start": 263.08, "end": 267.96, "text": " This perhaps is the most commonly misunderstood detail, so it is worth noting."}, {"start": 267.96, "end": 273.44, "text": " So in other words, it sees exactly what a human would see if the human would move the camera"}, {"start": 273.44, "end": 279.8, "text": " around very quickly, but it doesn't have to move the camera, which adds additional 
actions"}, {"start": 279.8, "end": 284.92, "text": " and cognitive load to the human, so one might say that the AI has an edge here."}, {"start": 284.92, "end": 287.76, "text": " The AI plays these games independently."}, {"start": 287.76, "end": 293.2, "text": " What's more, each game was played by a different AI, which also means that they do not memorize"}, {"start": 293.2, "end": 296.8, "text": " what happened in the last game, like a human would."}, {"start": 296.8, "end": 301.72, "text": " Early in the next game, we can see the utility of the wall of inaction, which is able to completely"}, {"start": 301.72, "end": 304.52000000000004, "text": " prevent the AI's early attack."}, {"start": 304.52000000000004, "end": 309.6, "text": " Later that game, the AI used disruptors, the unit which, if controlled with such level"}, {"start": 309.6, "end": 315.96000000000004, "text": " of expertise, can decimate the army of the opponent with area damage by killing multiple units"}, {"start": 315.96000000000004, "end": 317.04, "text": " at once."}, {"start": 317.04, "end": 321.52000000000004, "text": " It has done an outstanding job picking away at the army of TLO."}, {"start": 321.52000000000004, "end": 327.64000000000004, "text": " Then, after getting a significant advantage, Alpha Star loses it with a few sloppy plays"}, {"start": 327.64, "end": 333.08, "text": " and by deciding to engage aggressively while standing in tight choke points."}, {"start": 333.08, "end": 335.76, "text": " You can see that this is not such a great idea."}, {"start": 335.76, "end": 341.84, "text": " This was quite surprising as this is considered to be Starcraft 101 knowledge right there."}, {"start": 341.84, "end": 347.03999999999996, "text": " During the remainder of the match, the commentators mentioned that they play and watch games all"}, {"start": 347.03999999999996, "end": 352.4, "text": " the time and the AI came up with an army composition that they have never seen during"}, {"start": 352.4, "end": 354.24, "text": " a professional match."}, {"start": 354.24, "end": 357.0, "text": " And the AI won this one too."}, {"start": 357.0, "end": 361.96, "text": " After this game, it became clear that these agents can play any style in the game, which"}, {"start": 361.96, "end": 363.6, "text": " is terrifying."}, {"start": 363.6, "end": 368.68, "text": " Here you can see an alternative visualization that shows a little more of the inner workings"}, {"start": 368.68, "end": 370.48, "text": " of the neural network."}, {"start": 370.48, "end": 375.04, "text": " We can see what information it gets from the game, the visualization of neurons that get"}, {"start": 375.04, "end": 381.64, "text": " activated within the network, what locations and units are considered for the next actions,"}, {"start": 381.64, "end": 386.72, "text": " and whether the AI predicts itself as the winner or the loser of the game."}, {"start": 386.72, "end": 391.32000000000005, "text": " If you look carefully, you will also see the moment when the agent becomes certain that"}, {"start": 391.32000000000005, "end": 392.8, "text": " it will win this game."}, {"start": 392.8, "end": 397.56, "text": " I could look at this all day long and if you feel the same way, make sure to visit the"}, {"start": 397.56, "end": 398.56, "text": " video description."}, {"start": 398.56, "end": 401.24, "text": " I have a link to the source video for you."}, {"start": 401.24, "end": 410.20000000000005, "text": " The final result against TLO was 5-0."}, {"start": 
410.20000000000005, "end": 415.92, "text": " I actually lost everything all the fire matches."}, {"start": 415.92, "end": 417.2, "text": " So that's something."}, {"start": 417.2, "end": 422.6, "text": " And he mentioned that Alpha Star played very much like a human does and almost always managed"}, {"start": 422.6, "end": 424.04, "text": " to outmaneuver him."}, {"start": 424.04, "end": 429.36, "text": " However, TLO also mentioned that he is confident that upon playing more training matches against"}, {"start": 429.36, "end": 432.52000000000004, "text": " these agents, he would be able to defeat the AI."}, {"start": 432.52000000000004, "end": 435.16, "text": " I hope he will be given a chance to do that."}, {"start": 435.16, "end": 438.64, "text": " This AI seems strong, but still beatable."}, {"start": 438.64, "end": 443.12, "text": " I would also note that many of you would probably expect the later versions of Alpha Star to"}, {"start": 443.12, "end": 445.44, "text": " be way better than this one."}, {"start": 445.44, "end": 450.16, "text": " The good news is that the story continues and we'll see whether that's true."}, {"start": 450.16, "end": 455.52, "text": " So at this point, the DeepMind scientists said maybe we could try to be a bit more ambitious"}, {"start": 455.52, "end": 458.76, "text": " and asked, can you bring us someone better?"}, {"start": 458.76, "end": 463.15999999999997, "text": " And in the meantime, pressed that training button on the AI again."}, {"start": 463.15999999999997, "end": 469.52, "text": " In comes mana, a top tier pro player, one of the best pro task players in the world."}, {"start": 469.52, "end": 474.32, "text": " This was a nerve-wracking moment for DeepMind scientists as well because their agents played"}, {"start": 474.32, "end": 480.0, "text": " against each other, so they only knew the AI's win rate against a different AI."}, {"start": 480.0, "end": 484.15999999999997, "text": " But they didn't know how they would compete against a top pro player."}, {"start": 484.15999999999997, "end": 486.68, "text": " It may still have holes in its strategy."}, {"start": 486.68, "end": 488.48, "text": " Who knows what would happen."}, {"start": 488.48, "end": 492.44, "text": " Understandably, they had very little confidence in winning this one."}, {"start": 492.44, "end": 498.92, "text": " What they didn't expect is that the new AI was not slightly improved or somewhat improved."}, {"start": 498.92, "end": 500.15999999999997, "text": " No, no, no, no."}, {"start": 500.15999999999997, "end": 503.24, "text": " This new AI was next level."}, {"start": 503.24, "end": 509.2, "text": " This set of improved agents, among many other skills, had incredibly crisp micromanagement"}, {"start": 509.2, "end": 511.32, "text": " of each individual unit."}, {"start": 511.32, "end": 515.4, "text": " In the first game, we've seen it pulling back injured units, but still letting them"}, {"start": 515.4, "end": 521.2, "text": " attack from afar masterfully, leading to an early win for the AI against mana in the"}, {"start": 521.2, "end": 522.2, "text": " first game."}, {"start": 522.2, "end": 526.5600000000001, "text": " He and the commentators were equally shocked by how well the agent played."}, {"start": 526.5600000000001, "end": 531.6, "text": " And I will add that I remember from watching many games from a now inactive player by the"}, {"start": 531.6, "end": 534.5600000000001, "text": " name Marine King a few years ago."}, {"start": 534.5600000000001, "end": 
539.6800000000001, "text": " And I vividly remember that he played some of his games so well, the commentator said"}, {"start": 539.6800000000001, "end": 543.6800000000001, "text": " that there is no better way to put it, he played like a god."}, {"start": 543.6800000000001, "end": 549.2, "text": " I am almost afraid to say that this micromanagement was even more crisp than that."}, {"start": 549.2, "end": 551.9200000000001, "text": " This AI plays phenomenal games."}, {"start": 551.9200000000001, "end": 557.0400000000001, "text": " In later matches, the AI did things that seemed like blunders, like attacking on ramps"}, {"start": 557.04, "end": 563.7199999999999, "text": " and standing in choke points, or using unfavorable unit compositions and refusing to change it"}, {"start": 563.7199999999999, "end": 565.5999999999999, "text": " and get this."}, {"start": 565.5999999999999, "end": 571.88, "text": " It's still one all of those games 5-0 against a top pro player."}, {"start": 571.88, "end": 572.88, "text": " Let that sink in."}, {"start": 572.88, "end": 599.8, "text": " The competition was closed by a match where the AI was asked to also do the camera management."}, {"start": 599.8, "end": 605.68, "text": " The agent was still very competent, but somewhat weaker and as a result lost this game, hence"}, {"start": 605.68, "end": 609.0799999999999, "text": " the 0 or 1 part in the title."}, {"start": 609.0799999999999, "end": 613.92, "text": " My impression is that it was asked to do something that it was not designed for and expect"}, {"start": 613.92, "end": 617.28, "text": " a future version to be able to handle this use case as well."}, {"start": 617.28, "end": 622.0799999999999, "text": " I will also commend Mana for his solid game plan for this game and also huge respect"}, {"start": 622.0799999999999, "end": 624.4799999999999, "text": " for deep-mind for their sportsmanship."}, {"start": 624.4799999999999, "end": 629.7199999999999, "text": " Interestingly, in this match, Mana also started a worker over saturation strategy that"}, {"start": 629.72, "end": 631.28, "text": " I mentioned earlier."}, {"start": 631.28, "end": 634.96, "text": " This he learned from Alpha Star and used it in his winning game."}, {"start": 634.96, "end": 637.44, "text": " Isn't that amazing?"}, {"start": 637.44, "end": 642.5600000000001, "text": " Deep-mind also offered a Reddit AMA where anyone could ask them questions to make sure to"}, {"start": 642.5600000000001, "end": 644.4, "text": " clear up any confusion."}, {"start": 644.4, "end": 647.4, "text": " For instance, the actions per minute part has been addressed."}, {"start": 647.4, "end": 650.9200000000001, "text": " I've included a link to that for you in the video description."}, {"start": 650.9200000000001, "end": 656.44, "text": " To go from a turn-based, perfect information game, like Go, to a real-time strategy game"}, {"start": 656.44, "end": 662.2800000000001, "text": " of imperfect information in about a year sounds like science fiction to me."}, {"start": 662.2800000000001, "end": 664.12, "text": " And yet, here it is."}, {"start": 664.12, "end": 669.44, "text": " Also, note that Deep-mind's goal is not to create a godlike Starcraft II AI."}, {"start": 669.44, "end": 674.72, "text": " They want to solve intelligence, not Starcraft II, and they use this game as a vehicle to"}, {"start": 674.72, "end": 679.84, "text": " demonstrate its long-term decision-making capabilities against human players."}, {"start": 679.84, "end": 684.24, "text": " 
One more important thing to emphasize is that the building blocks of Alpha Star are meant"}, {"start": 684.24, "end": 690.84, "text": " to be reasonably general AI algorithms, which means that parts of this AI can be reused"}, {"start": 690.84, "end": 692.32, "text": " for other things."}, {"start": 692.32, "end": 698.52, "text": " For instance, Demis Asabi mentioned weather prediction and climate modeling as examples."}, {"start": 698.52, "end": 702.5600000000001, "text": " If you take only one thought from this video, let it be this one."}, {"start": 702.5600000000001, "end": 707.24, "text": " I urge you to watch all the matches because what you are witnessing may very well be history"}, {"start": 707.24, "end": 708.5600000000001, "text": " in the making."}, {"start": 708.5600000000001, "end": 714.12, "text": " I put a link to the whole event in the video description plus plenty more materials, including"}, {"start": 714.12, "end": 720.0, "text": " other people's analysis, Manus' personal experience of the event, his breakdown of his games,"}, {"start": 720.0, "end": 722.84, "text": " and what was going through his head during the event."}, {"start": 722.84, "end": 727.32, "text": " I highly recommend checking out his fifth game, but really, go through them all."}, {"start": 727.32, "end": 728.72, "text": " It's a ton of fun."}, {"start": 728.72, "end": 733.32, "text": " I made sure to include a more skeptical analysis of the game as well to give you a balanced"}, {"start": 733.32, "end": 735.04, "text": " portfolio of insights."}, {"start": 735.04, "end": 740.08, "text": " Also, huge respect for Deep-mind and the players who practice their chops for many, many years"}, {"start": 740.08, "end": 743.28, "text": " and have played really well under immense pressure."}, {"start": 743.28, "end": 745.28, "text": " Thank you all for this delightful event."}, {"start": 745.28, "end": 746.9599999999999, "text": " It really made my day."}, {"start": 746.9599999999999, "end": 751.24, "text": " And the ultimate question is, how long did it take to train these agents?"}, {"start": 751.24, "end": 752.24, "text": " Two weeks."}, {"start": 752.24, "end": 753.24, "text": " Wow."}, {"start": 753.24, "end": 758.16, "text": " And what's more, after the training step, the AI can be deployed on an inexpensive consumer"}, {"start": 758.16, "end": 759.52, "text": " desktop machine."}, {"start": 759.52, "end": 761.9599999999999, "text": " And this is only the first version."}, {"start": 761.9599999999999, "end": 766.28, "text": " This is just a taste, and it would be hard to overstate how big of a milestone this"}, {"start": 766.28, "end": 767.28, "text": " is."}, {"start": 767.28, "end": 772.3199999999999, "text": " And now, scientists that Deep-mind have sufficient data to calculate the amount of resources"}, {"start": 772.32, "end": 776.36, "text": " they need to spend to train the next even more improved agents."}, {"start": 776.36, "end": 780.6, "text": " I am confident that they will also take into consideration the feedback from the Starcraft"}, {"start": 780.6, "end": 783.48, "text": " community when creating this next version."}, {"start": 783.48, "end": 785.32, "text": " What a time to be alive."}, {"start": 785.32, "end": 787.0400000000001, "text": " What do you think about all this?"}, {"start": 787.0400000000001, "end": 788.0400000000001, "text": " Any predictions?"}, {"start": 788.0400000000001, "end": 790.48, "text": " Is this harder than Dota 2?"}, {"start": 790.48, "end": 792.44, "text": " Let 
me know in the comments section below."}, {"start": 792.44, "end": 798.24, "text": " And remember, we humans build up new strategies by learning from each other, and of course,"}, {"start": 798.24, "end": 802.6, "text": " the AI, as you have seen here, doesn't care about any of that."}, {"start": 802.6, "end": 806.96, "text": " It doesn't need intuition and can come up with unusual strategies."}, {"start": 806.96, "end": 812.28, "text": " The difference now is that these strategies work against some of the best human players."}, {"start": 812.28, "end": 816.08, "text": " Now it's time for us to finally start learning from an AI."}, {"start": 816.08, "end": 817.08, "text": " GG."}, {"start": 817.08, "end": 829.1600000000001, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Do_00r8NGMY
AI Learns Real-Time Defocus Effects in VR
The paper "DeepFocus: Learned Image Synthesis for Computational Displays" and its source code is available here: https://research.fb.com/publications/deepfocus-siggraph-asia-2018/ https://www.oculus.com/blog/introducing-deepfocus-the-ai-rendering-system-powering-half-dome/ https://github.com/facebookresearch/DeepFocus Pick up cool perks on our Patreon page: › https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Jason Rollins, Javier Bustamante, John De Witt, Kaiesh Vohra, Kjartan Olason, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. If we are to write a sophisticated light simulation program and we write a list of features that we really wish to have, we should definitely keep an eye on defocus effects. This is what it looks like, and in order to do that, our simulation program has to take into consideration the geometry and thickness of the lenses within our virtual camera, and even though it looks absolutely amazing, it is very costly to simulate that properly. This particular technique attempts to do this in real time and for specialized display types, typically ones that are found in head-mounted displays for virtual reality applications. So here we go. Due to popular request, a little VR in Two Minute Papers. In virtual reality, defocus effects are especially important because they mimic how the human visual system works. Only the tiny region that we are focusing on looks sharp, and everything else should be blurry, but not any kind of blurry. It has to look physically plausible. If we can pull this off just right, we'll get a great and immersive VR experience. The heart of this problem is looking at a 2D image and being able to estimate how far away different objects are from the camera lens. This is a task that is relatively easy for humans because we have an intuitive understanding of depth and geometry, but of course, this is no easy task for a machine. To accomplish this, a convolutional neural network is used here, and our seasoned Fellow Scholars know that this means that we need a ton of training data. The input should be a bunch of images and their corresponding depth maps for the neural network to learn from. The authors implemented this with a random scene generator, which creates a bunch of these crazy scenes with a lot of occlusions and computes, via simulation, the appropriate depth map for them. On the right, you see these depth maps, or in other words, images that describe to the computer how far away these objects are. The incredible thing is that the neural network was able to learn the concept of occlusions and was able to create super high quality defocus effects. Not only that, but this technique can also be reconfigured to fit different use cases. If we are okay with spending up to 50 milliseconds to render an image, which is 20 frames per second, we can get super high quality images, or if we only have a budget of 5 milliseconds per image, which is 200 frames per second, we can do that too, and the quality of the outputs degrades just a tiny bit. While we are talking about image quality, let's have a closer look at the paper, where we see a ton of comparisons against previous works and, of course, against the baseline ground truth knowledge. You see two metrics here: PSNR, which is the peak signal-to-noise ratio, and SSIM, the structural similarity metric. In this case, both are used to measure how close the output of these techniques is to the ground truth footage. Both are subject to maximization. For instance, here you see that the second best technique has a peak signal-to-noise ratio of around 40 and this new method scores 45. Well, some may think that's just a 12 percent difference, right? No. Note that PSNR works on a logarithmic scale, which means that even a tiny difference in numbers translates to a huge difference in terms of visuals. You can see in the close-ups that the output of this new method is close to indistinguishable from the ground truth.
A neural network that successfully learned the concept of occlusions and depth by looking at random scenes. Bravo! As virtual reality applications are on the rise these days, this technique will be useful to provide a more immersive experience for the users. And to make sure that this method sees more widespread use, the authors also made the source code and the training datasets available for everyone, free of charge, so make sure to have a look at that and run your own experiments if you're interested. I'll be doing that in the meantime. Thanks for watching and for your generous support, and I'll see you next time.
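The transcript above points out that PSNR is logarithmic, so a jump from roughly 40 to 45 dB is far more than a "12 percent" improvement. Below is a short Python sketch, using only NumPy, of how PSNR is computed and why a 5 dB gap corresponds to roughly a 3.2 times lower mean squared error; the image values are assumed to be normalized to [0, 1], and the snippet is an illustration rather than the paper's actual evaluation code.

```python
import numpy as np

def psnr(reference, test, max_val=1.0):
    """Peak signal-to-noise ratio in dB for images with values in [0, max_val]."""
    mse = np.mean((np.asarray(reference, dtype=np.float64)
                   - np.asarray(test, dtype=np.float64)) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)

# Two noisy approximations of the same reference image; method B is less noisy.
rng = np.random.default_rng(0)
reference = rng.random((64, 64))
method_a = np.clip(reference + rng.normal(0.0, 0.02, reference.shape), 0.0, 1.0)
method_b = np.clip(reference + rng.normal(0.0, 0.01, reference.shape), 0.0, 1.0)
print(f"PSNR of method A: {psnr(reference, method_a):.1f} dB")
print(f"PSNR of method B: {psnr(reference, method_b):.1f} dB")

# Because PSNR is 10 * log10(max^2 / MSE), a 5 dB gap (for example 45 dB vs. 40 dB,
# as quoted in the transcript) means the error is lower by a factor of 10^(5/10).
print(f"A 5 dB PSNR advantage is about a {10 ** 0.5:.2f}x lower mean squared error")
```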
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is Dominic Papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.4, "end": 9.92, "text": " If we are to write a sophisticated light simulation program and we write a list of features"}, {"start": 9.92, "end": 15.280000000000001, "text": " that we really wish to have, we should definitely keep an eye on the focus effects."}, {"start": 15.280000000000001, "end": 20.0, "text": " This is what it looks like and in order to do that, our simulation program has to take"}, {"start": 20.0, "end": 26.16, "text": " into consideration the geometry and thickness of the lenses within our virtual camera and"}, {"start": 26.16, "end": 31.52, "text": " even though it looks absolutely amazing, it is very costly to simulate that properly."}, {"start": 31.52, "end": 37.84, "text": " This particular technique attempts to do this in real time and for specialized display types,"}, {"start": 37.84, "end": 43.84, "text": " typically once that are found in head-mounted displays for virtual reality applications."}, {"start": 43.84, "end": 50.400000000000006, "text": " So here we go. Due to popular requests, a little VR in two-minute papers. In virtual reality,"}, {"start": 50.4, "end": 56.72, "text": " defocus effects are especially important because they mimic how the human visual system works."}, {"start": 56.72, "end": 62.56, "text": " Only a tiny region that we are focusing on looks sharp and everything else should be blurry,"}, {"start": 62.56, "end": 66.96, "text": " but not any kind of blurry. It has to look physically plausible."}, {"start": 66.96, "end": 72.16, "text": " If we can pull this off just right, we'll get a great and immersive VR experience."}, {"start": 72.16, "end": 77.2, "text": " The heart of this problem is looking at a 2D image and being able to estimate"}, {"start": 77.2, "end": 81.28, "text": " how far away different objects are from the camera lens."}, {"start": 81.28, "end": 87.36, "text": " This is a task that is relatively easy for humans because we have an intuitive understanding of depth"}, {"start": 87.36, "end": 93.76, "text": " and geometry, but of course this is no easy task for a machine. To accomplish this, here a"}, {"start": 93.76, "end": 98.88, "text": " convolution on your own network is used and our seasoned fellow scholars know that this means"}, {"start": 98.88, "end": 104.24000000000001, "text": " that we need a ton of training data. The input should be a bunch of images and their"}, {"start": 104.24, "end": 109.75999999999999, "text": " corresponding depth maps for the neural network to learn from. The authors implemented this with"}, {"start": 109.75999999999999, "end": 115.75999999999999, "text": " a random scene generator which creates a bunch of these crazy scenes with a lot of occlusions"}, {"start": 115.75999999999999, "end": 120.8, "text": " and computes via simulation the appropriate depth map for them. On the right you see these"}, {"start": 120.8, "end": 127.36, "text": " depth maps or in other words images that describe to the computer how far away these objects are."}, {"start": 127.36, "end": 133.04, "text": " The incredible thing is that the neural network was able to learn the concept of occlusions"}, {"start": 133.04, "end": 139.28, "text": " and was able to create super high quality defocus effects. Not only that but this technique can also"}, {"start": 139.28, "end": 146.07999999999998, "text": " be reconfigured to fit different use cases. 
If we are okay with spending up to 50 milliseconds to"}, {"start": 146.07999999999998, "end": 152.95999999999998, "text": " render an image which is 20 frames per second we can get super high quality images or if we only"}, {"start": 152.95999999999998, "end": 158.72, "text": " have a budget of 5 milliseconds per image which is 200 frames per second we can do that and the"}, {"start": 158.72, "end": 165.2, "text": " quality of the outputs degrades just a tiny bit. While we are talking about image quality let's"}, {"start": 165.2, "end": 170.96, "text": " have a closer look at the paper where we see a ton of comparisons against previous works and of"}, {"start": 170.96, "end": 178.0, "text": " course against the baseline ground truth knowledge. You see two metrics here PSNR which is the peak"}, {"start": 178.0, "end": 184.96, "text": " signal to noise ratio and SSIM the structural similarity metric. In this case both are used to"}, {"start": 184.96, "end": 190.96, "text": " measure how close the output of these techniques is to the ground truth footage. Both are subject"}, {"start": 190.96, "end": 196.48000000000002, "text": " to maximization. For instance here you see that the second best technique has a peak signal to"}, {"start": 196.48000000000002, "end": 203.84, "text": " noise ratio of around 40 and this new method scores 45. Well some may think that's just a 12"}, {"start": 203.84, "end": 211.92000000000002, "text": " percent difference right? No. Note that PSNR works on a logarithmic scale which means that even a"}, {"start": 211.92, "end": 217.44, "text": " tiny difference in numbers translates to a huge difference in terms of visuals. You can see in"}, {"start": 217.44, "end": 223.76, "text": " the close-ups that the output of this new method is close to indistinguishable from the ground truth."}, {"start": 223.76, "end": 229.51999999999998, "text": " A neural network that successfully learned the concept of occlusions and depth by looking at"}, {"start": 229.51999999999998, "end": 236.16, "text": " random scenes. Bravo! As virtual reality applications are under rise these days this technique will"}, {"start": 236.16, "end": 241.83999999999997, "text": " be useful to provide a more immersive experience for the users and to make sure that this method sees"}, {"start": 241.84, "end": 247.36, "text": " more widespread use. The authors also made the source code and the training data sets available"}, {"start": 247.36, "end": 252.48000000000002, "text": " for everyone free of charge so make sure to have a look at that and run your own experiments if"}, {"start": 252.48000000000002, "end": 257.04, "text": " you're interested. I'll be doing that in the meantime. Thanks for watching and for your generous"}, {"start": 257.04, "end": 273.6, "text": " support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=-cOYwZ2XcAc
None of These Faces Are Real!
The paper "A Style-Based Generator Architecture for Generative Adversarial Networks", i.e., #StyleGAN and its video available here: https://arxiv.org/abs/1812.04948 https://www.youtube.com/watch?v=kSLJriaOumA Our material synthesis paper is available here: https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/ Pick up cool perks on our Patreon page: › https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Jason Rollins, Javier Bustamante, John De Witt, Kaiesh Vohra, Kjartan Olason, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #NVIDIA
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Before we start, I will tell you right away to hold on to your papers. When I first saw the results, I didn't do that and almost fell out of the chair. Scientists at NVIDIA published an amazing work not so long ago that was able to dream up high-resolution images of imaginary celebrities. It was a progressive technique, which means that it started out with a low-fidelity image and kept refining it, and over time, we found ourselves with high-quality images of people that don't exist. We also discussed in the previous episode that the algorithm is able to learn the properties and features of a human face and come up with truly novel human beings. There is true learning happening here, not just copying the training set for these neural networks. This is an absolutely stellar research work, and for a moment, let's imagine that we are the art directors of a movie or a computer game where we require that the algorithm synthesizes more human faces for us. Whenever I worked with artists in the industry, I've learned that what artists often look for beyond realism is control. Artists seek to conjure up new worlds, and those new worlds require consistency and artistic direction to suspend our disbelief. So here's a new piece of work from NVIDIA with some killer new features to address this. Killer feature number one: it can combine different aspects of these images. Let's have a look at an example over here. The images above are the inputs, and we can lock in several aspects of these images, for instance gender, age, pose, and more. Then we take a different image, which will be the other source image, and the output is these two images fused together, almost like style transfer or feature transfer for human faces. As a result, we are able to generate high-fidelity images of human faces that are incredibly lifelike, and of course, none of these faces are real. How cool is that? Absolutely amazing. Killer feature number two: we can also vary these parameters one by one, and this way, we have more fine-grained artistic control over the outputs. Killer feature number three: it can also perform interpolation, which means that we have desirable images A and B, and this would create intermediate images between them. As always, the holy grail problem with this is that each of the intermediate images has to make sense and be realistic. And just look at this. It can morph one gender into the other, blend hairstyles and colors, and in the meantime, the facial gestures remain crisp and realistic. I am out of words. This is absolutely incredible. It also works on other datasets, for instance cars, bedrooms, and, of course, you guessed it right: cats. Now, interestingly, it also varies the background behind the characters, which is a hallmark of latent space based techniques. I wonder if and how this will be solved over time. We also published a paper not so long ago that was about using learning algorithms to synthesize not human faces, but photorealistic materials. We introduced a neural renderer that was able to perform a specialized version of a light transport simulation in real time as well. However, in the paper, we noted that the resolution of the output images is limited by the onboard video memory on the graphics card that is being used and should improve over time as new graphics cards are developed with more memory.
And get this, a few days ago, folks at NVIDIA reached out and said that they had just released an amazing new graphics card, the Titan RTX, which has a ton of onboard memory, and that they would be happy to send over one of those. Now we can improve our work further. A huge thank you to them for being so thoughtful, and therefore this episode has been kindly sponsored by NVIDIA. Thanks for watching and for your generous support and I'll see you next time.
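As a small illustration of the interpolation feature discussed in this episode, here is a minimal sketch of blending between two latent codes, the basic idea behind morphing one face into another. This is not NVIDIA's StyleGAN code; the generator network that would turn each intermediate code into an image is omitted, and the 512-dimensional latent size is just an illustrative assumption.

```python
# Minimal sketch of latent-space interpolation ("killer feature number three").
# A real generator would map each interpolated code to an image; here we only
# produce the intermediate latent codes themselves.
import numpy as np

def slerp(z0, z1, t):
    """Spherical interpolation between two latent vectors z0 and z1."""
    z0_n, z1_n = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0_n, z1_n), -1.0, 1.0))  # angle between them
    so = np.sin(omega)
    if so < 1e-8:                        # nearly parallel: fall back to a linear blend
        return (1.0 - t) * z0 + t * z1
    return np.sin((1.0 - t) * omega) / so * z0 + np.sin(t * omega) / so * z1

rng = np.random.default_rng(0)
z_a, z_b = rng.standard_normal(512), rng.standard_normal(512)  # two "source" codes

# Intermediate codes; each one would be fed to the generator to get one morph frame.
frames = [slerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 8)]
print([round(float(np.linalg.norm(f)), 2) for f in frames])
```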
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Jona Ifehir."}, {"start": 4.6000000000000005, "end": 8.96, "text": " Before we start, I will tell you right away to hold on to your papers."}, {"start": 8.96, "end": 14.200000000000001, "text": " When I first seen the results, I didn't do that and almost fell out of the chair."}, {"start": 14.200000000000001, "end": 19.36, "text": " Scientists at NVIDIA published an amazing work not so long ago that was able to dream up"}, {"start": 19.36, "end": 22.92, "text": " high-resolution images of imaginary celebrities."}, {"start": 22.92, "end": 28.2, "text": " It was a progressive technique which means that it started out with a low-fidelity image"}, {"start": 28.2, "end": 34.96, "text": " and kept refining it and over time we found ourselves with high-quality images of people"}, {"start": 34.96, "end": 36.8, "text": " that don't exist."}, {"start": 36.8, "end": 42.16, "text": " We also discussed in the previous episode that the algorithm is able to learn the properties"}, {"start": 42.16, "end": 47.68, "text": " and features of a human face and come up with truly novel human beings."}, {"start": 47.68, "end": 52.239999999999995, "text": " There is true learning happening here, not just copying the training set for these neural"}, {"start": 52.239999999999995, "end": 53.239999999999995, "text": " networks."}, {"start": 53.24, "end": 58.760000000000005, "text": " This is an absolutely stellar research work and for a moment let's imagine that we are"}, {"start": 58.760000000000005, "end": 65.32000000000001, "text": " the art directors of a movie or a computer game where we require that the algorithm synthesizes"}, {"start": 65.32000000000001, "end": 67.24000000000001, "text": " more human faces for us."}, {"start": 67.24000000000001, "end": 72.08, "text": " Whenever I worked with artists in the industry, I've learned that what artists often look"}, {"start": 72.08, "end": 76.04, "text": " for beyond realism is control."}, {"start": 76.04, "end": 82.04, "text": " Artists seek to conjure up new worlds and those new worlds require consistency and artistic"}, {"start": 82.04, "end": 84.96000000000001, "text": " direction to suspend our disbelief."}, {"start": 84.96000000000001, "end": 90.72, "text": " So here's a new piece of work from Nvidia with some killer new features to address this."}, {"start": 90.72, "end": 92.52000000000001, "text": " Killer feature number one."}, {"start": 92.52000000000001, "end": 96.0, "text": " It can combine different aspects of these images."}, {"start": 96.0, "end": 98.2, "text": " Let's have a look at an example over here."}, {"start": 98.2, "end": 104.76, "text": " The images above are the inputs and we can lock in several aspects of these images, for"}, {"start": 104.76, "end": 109.52000000000001, "text": " instance like gender, age, pose and more."}, {"start": 109.52, "end": 114.72, "text": " One would take a different image, this will be the other source image and the output is"}, {"start": 114.72, "end": 121.52, "text": " these two images fused together, almost like star transfer or feature transfer for human"}, {"start": 121.52, "end": 122.67999999999999, "text": " faces."}, {"start": 122.67999999999999, "end": 128.6, "text": " As a result we are able to generate high fidelity images of human faces that are incredibly"}, {"start": 128.6, "end": 133.28, "text": " lifelike and of course none of these faces are real."}, {"start": 133.28, "end": 135.68, 
"text": " How cool is that?"}, {"start": 135.68, "end": 137.24, "text": " Absolutely amazing."}, {"start": 137.24, "end": 139.2, "text": " Killer feature number two."}, {"start": 139.2, "end": 144.28, "text": " We can also vary these parameters one by one and this way we have a more fine grained"}, {"start": 144.28, "end": 155.04, "text": " artistic control over the outputs."}, {"start": 155.04, "end": 156.95999999999998, "text": " Killer feature number three."}, {"start": 156.95999999999998, "end": 163.28, "text": " It can also perform interpolation which means that we have desirable images A and B and"}, {"start": 163.28, "end": 166.79999999999998, "text": " this would create intermediate images between them."}, {"start": 166.8, "end": 171.92000000000002, "text": " As always with this the whole eGrea problem is that each of the intermediate images have"}, {"start": 171.92000000000002, "end": 175.60000000000002, "text": " to make sense and be realistic."}, {"start": 175.60000000000002, "end": 177.4, "text": " And just look at this."}, {"start": 177.4, "end": 184.12, "text": " It can morph one gender into the other, blend hairstyles, colors and in the meantime the"}, {"start": 184.12, "end": 187.52, "text": " facial gestures remain crisp and realistic."}, {"start": 187.52, "end": 189.08, "text": " I am out of words."}, {"start": 189.08, "end": 191.52, "text": " This is absolutely incredible."}, {"start": 191.52, "end": 198.28, "text": " It kind of works on other datasets, for instance cars, bedrooms and of course you guessed it"}, {"start": 198.28, "end": 199.28, "text": " right."}, {"start": 199.28, "end": 201.48000000000002, "text": " Cats."}, {"start": 201.48000000000002, "end": 209.64000000000001, "text": " Now interestingly it also varies the background behind the characters which is a hallmark"}, {"start": 209.64000000000001, "end": 211.68, "text": " of latent space based techniques."}, {"start": 211.68, "end": 215.44, "text": " I wonder if and how this will be solved over time."}, {"start": 215.44, "end": 221.28, "text": " We also published a paper not so long ago that was about using learning algorithms to synthesize"}, {"start": 221.28, "end": 225.36, "text": " not human faces but photorealistic materials."}, {"start": 225.36, "end": 230.6, "text": " We introduced a neural renderer that was able to perform a specialized version of a light"}, {"start": 230.6, "end": 233.76, "text": " transport simulation in real time as well."}, {"start": 233.76, "end": 239.72, "text": " However, in the paper we noted that the resolution of the output images is limited by the onboard"}, {"start": 239.72, "end": 245.2, "text": " video memory on the graphics card that is being used and should improve over time as new"}, {"start": 245.2, "end": 248.28, "text": " graphics cards are developed with more memory."}, {"start": 248.28, "end": 254.16, "text": " And get this, a few days ago, Fox at NVIDIA reached out and said that they just released"}, {"start": 254.16, "end": 260.88, "text": " an amazing new graphics card, the Titan RTX which has a ton of onboard memory and would"}, {"start": 260.88, "end": 263.2, "text": " be happy to send over one of those."}, {"start": 263.2, "end": 265.16, "text": " Now we can improve our work further."}, {"start": 265.16, "end": 270.0, "text": " A huge thank you for them for being so thoughtful and therefore this episode has been kindly"}, {"start": 270.0, "end": 271.72, "text": " sponsored by NVIDIA."}, {"start": 271.72, "end": 278.72, "text": " Thanks for watching and 
for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=1ct_P3IZow0
What Makes a Good Image Generator AI?
Three paper recommendations this time: - Inception score - "Improved Techniques for Training GANs" - https://arxiv.org/abs/1606.03498 - "Progressive Growing of GANs for Improved Quality" - https://arxiv.org/abs/1710.10196 - Inception score criticism - "A Note on the Inception Score" - https://arxiv.org/abs/1801.01973 Pick up cool perks on our Patreon page: › https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Jason Rollins, Javier Bustamante, John De Witt, Kaiesh Vohra, Kjartan Olason, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/photo-3840163/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this series, we frequently talk about generative adversarial networks, or GANs in short. This means a pair of neural networks that battle each other over time to master a task, for instance, to generate realistic-looking images from a written description. Here you see NVIDIA's amazing work that was able to dream up high-resolution images of imaginary celebrities. In the next episode, we will talk some more about their newest work that does something like this and is even better at it, believe it or not. I hope you have subscribed to the channel to make sure not to miss out on that one. And for now, while we marvel at these outstanding results, I will quickly tell you about overfitting and what it has to do with images of celebrities. When we train a neural network, we wish to make sure that it understands the concepts we are trying to teach it. Typically, we feed it a database of labeled images where the labels mean that this depicts a dog and this one is not a dog but a cat. After the training step took place, in the ideal case, it will have built an understanding of these images so that when we show it new, previously unseen images, it will be able to correctly guess which animals they depict. However, in many cases, we start training the neural network, and during the training, it gives us wonderful results and it gets the animals right every single time. But whenever it sees new, previously unseen images, it cannot tell a dog from a cat at all. This peculiar case is what we call overfitting, and this is the bane of machine learning research. Overfitting is like the kind of student we all encountered at school who is always very good at memorizing the textbook but cannot solve even the simplest new problems on the exam. This is not learning, this is memorization. Overfitting means that a neural network does not learn the concept of dogs or cats; it just tries to memorize this database of images and is able to regurgitate it for us, but this knowledge cannot generalize to new images. That's not good. I want intelligence, not a copying machine. So at this point, it is probably clearer what images of celebrities have to do with overfitting. So how do we know that this algorithm doesn't just memorize the celebrity image dataset it was given and can really generate new imaginary people? Is it the good kind of student or the lousy student? Technique number one. Let's not just dream up images of new celebrities but also visualize images from the training data that are similar to this image. If they are too similar, we have an overfitting problem. Let's have a look. Now it is easy to see that this is proper intelligence and not a copying machine, because it was clearly able to learn the facial features of these people and combine them in novel ways. This is what scientists at NVIDIA did in their paper, and they are to be commended for that. Technique number two: well, just take a bunch of humans and let them decide whether these images differ from the training set and whether they are realistic. This kind of works, but of course it costs quite a bit of money and labor, and we end up with something subjective. We better not compare the quality of research papers based on that if we can avoid it. And get this, we can actually avoid it by using something called the Inception Score. Instead of using humans, this score uses a neural network to have a look at these images and measure the quality and the diversity of the results.
As long as two images produce similar neural activations within this neural network, they will be deemed to be similar. Finally, this score is an objective way of measuring progress within this field, and it is, of course, subject to maximization. So now you of course wish to know what the state of the art is today. For reference, a set of real images has an Inception Score of 233, and the best works that produced synthetic images just a few years ago had a score of around 50. To the best of my knowledge, as of the publishing of this video, the highest Inception Score for an AI is close to 166, so we've come a long, long way. You can see some of these images here. Truly exciting. What a time to be alive. Disadvantage number one of the method is that, because the diversity of the outputs is also to be measured, it requires many thousands of images. This is likely more of an issue with the problem definition itself and not this method, and also, since this means that computers and not real people have to do the work, we can give this one a pass. Disadvantage number two, and I will include this paper in the video description for you, is that there are cases where it is possible to get the network to think an image is of higher quality than another one, even if it clearly isn't. Now you see that we have pretty ingenious techniques to measure the quality of image generator AI programs, and of course this area of research is also subject to improvement and I'll be here to tell you about it. Thanks for watching and for your generous support and I'll see you next time.
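For the curious, here is a minimal sketch of how the Inception Score discussed in this episode is typically computed. In the real metric, the class probabilities come from the Inception v3 network's softmax predictions over many thousands of generated images; the random probabilities below are just a placeholder so the snippet runs on its own.

```python
# Sketch of the Inception Score: IS = exp( mean over images of KL( p(y|x) || p(y) ) ).
# `probs` would normally be Inception v3 softmax outputs for generated images.
import numpy as np

def inception_score(probs, eps=1e-12):
    p_y = probs.mean(axis=0, keepdims=True)                   # marginal label distribution
    kl = probs * (np.log(probs + eps) - np.log(p_y + eps))    # per-image KL divergence terms
    return float(np.exp(kl.sum(axis=1).mean()))

rng = np.random.default_rng(0)
fake_logits = rng.standard_normal((5000, 1000))               # 5000 "images", 1000 classes
probs = np.exp(fake_logits) / np.exp(fake_logits).sum(axis=1, keepdims=True)  # softmax
print(inception_score(probs))  # confident *and* diverse predictions yield a higher score
```

Intuitively, each image should be classified confidently (high quality), while the classes across all images should be spread out (high diversity); both push the score up.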
[{"start": 0.0, "end": 4.5600000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.5600000000000005, "end": 10.96, "text": " In this series, we frequently talk about generative adversarial networks or GANs in short."}, {"start": 10.96, "end": 16.28, "text": " This means a pair of neural networks that battle each other over time to master a task,"}, {"start": 16.28, "end": 21.28, "text": " for instance, to generate realistic-looking images from a written description."}, {"start": 21.28, "end": 26.96, "text": " Here you see Nvidia's amazing work that was able to dream up high-resolution images of"}, {"start": 26.96, "end": 29.16, "text": " imaginary celebrities."}, {"start": 29.16, "end": 33.88, "text": " In the next episode, we will talk some more about their newest work that does something"}, {"start": 33.88, "end": 37.92, "text": " like this and is even better at it, believe it or not."}, {"start": 37.92, "end": 42.36, "text": " I hope you have subscribed to the channel to make sure not to miss out on that one."}, {"start": 42.36, "end": 48.32, "text": " And for now, while we marvel at these outstanding results, I will quickly tell you about overfitting"}, {"start": 48.32, "end": 51.96, "text": " and what it has to do with images of celebrities."}, {"start": 51.96, "end": 56.92, "text": " When we train a neural network, we wish to make sure that it understands the concepts"}, {"start": 56.92, "end": 58.64, "text": " we are trying to teach it."}, {"start": 58.64, "end": 64.24, "text": " Typically, we feed it a database of labeled images where the labels mean that this depicts"}, {"start": 64.24, "end": 68.04, "text": " a dog and this one is not a dog but a cat."}, {"start": 68.04, "end": 73.52, "text": " After the training step took place, in the ideal case, it will be able to build an understanding"}, {"start": 73.52, "end": 78.76, "text": " of these images so that when we show them new, previously unseen images, it would be able"}, {"start": 78.76, "end": 82.28, "text": " to correctly guess which animals they depict."}, {"start": 82.28, "end": 87.96000000000001, "text": " However, in many cases, we start training the neural network and during the training,"}, {"start": 87.96, "end": 93.63999999999999, "text": " it gives us wonderful results and it gets the animals right every single time."}, {"start": 93.63999999999999, "end": 100.91999999999999, "text": " But whenever it sees new, previously unseen images, it can tell a dog from a cat at all."}, {"start": 100.91999999999999, "end": 105.63999999999999, "text": " This peculiar case is what we call overfitting and this is the bane of machine learning"}, {"start": 105.63999999999999, "end": 107.03999999999999, "text": " research."}, {"start": 107.03999999999999, "end": 112.08, "text": " Overfitting is like the kind of student we all encounter that school who is always very"}, {"start": 112.08, "end": 118.67999999999999, "text": " good at memorizing the textbook but can solve even the simplest new problems on the exam."}, {"start": 118.67999999999999, "end": 122.36, "text": " This is not learning, this is memorization."}, {"start": 122.36, "end": 127.0, "text": " Overfitting means that a neural network does not learn the concept of dogs or cats, it"}, {"start": 127.0, "end": 132.6, "text": " just tries to memorize this database of images and is able to regurgitate it for us but"}, {"start": 132.6, "end": 135.92, "text": " this knowledge cannot generalize for new images."}, 
{"start": 135.92, "end": 137.4, "text": " That's not good."}, {"start": 137.4, "end": 140.92, "text": " I want intelligence, not a copying machine."}, {"start": 140.92, "end": 146.76, "text": " So at this point, it is probably clearer what images of celebrities have to do with overfitting."}, {"start": 146.76, "end": 152.51999999999998, "text": " So how do we know that this algorithm doesn't just memorize the celebrity image dataset"}, {"start": 152.51999999999998, "end": 157.16, "text": " it was given and can really generate new imaginary people?"}, {"start": 157.16, "end": 161.39999999999998, "text": " Is it the good kind of student or the lousy student?"}, {"start": 161.39999999999998, "end": 162.72, "text": " Technique number one."}, {"start": 162.72, "end": 168.35999999999999, "text": " Let's not just dream up images of new celebrities but also visualize images from the training"}, {"start": 168.36, "end": 171.24, "text": " data that are similar to this image."}, {"start": 171.24, "end": 174.68, "text": " If they are too similar, we have an overfitting problem."}, {"start": 174.68, "end": 176.0, "text": " Let's have a look."}, {"start": 176.0, "end": 181.56, "text": " Now it is easy to see that this is proper intelligence and not a copying machine because"}, {"start": 181.56, "end": 188.68, "text": " it was clearly able to learn the facial features of these people and combine them in novel ways."}, {"start": 188.68, "end": 194.32000000000002, "text": " This is what scientists at NVIDIA did in their paper and are to be commanded for that."}, {"start": 194.32, "end": 200.28, "text": " Technique number two, well, just take a bunch of humans and let them decide whether these"}, {"start": 200.28, "end": 205.35999999999999, "text": " images differ from the training set and if they are realistic."}, {"start": 205.35999999999999, "end": 211.56, "text": " This kind of works but of course costs quite a bit of money, labor and we end up with"}, {"start": 211.56, "end": 213.35999999999999, "text": " something subjective."}, {"start": 213.35999999999999, "end": 218.35999999999999, "text": " We better not compare the quality of research papers based on that if we can avoid it."}, {"start": 218.36, "end": 225.04000000000002, "text": " And get this, we can actually avoid it by using something called the Inception Score."}, {"start": 225.04000000000002, "end": 230.20000000000002, "text": " Instead of using humans, this score uses a neural network to have a look at these images"}, {"start": 230.20000000000002, "end": 234.60000000000002, "text": " and measure the quality and the diversity of the results."}, {"start": 234.60000000000002, "end": 239.44000000000003, "text": " As long as the image produces similar neural activations within this neural network, two"}, {"start": 239.44000000000003, "end": 242.16000000000003, "text": " images will be deemed to be similar."}, {"start": 242.16000000000003, "end": 247.32000000000002, "text": " Finally, this score is an objective way of measuring progress within this field and it"}, {"start": 247.32, "end": 250.51999999999998, "text": " is of course subject to maximization."}, {"start": 250.51999999999998, "end": 255.56, "text": " So now you of course wish to know what the state of the artist today."}, {"start": 255.56, "end": 262.24, "text": " For reference, a set of real images has an Inception Score of 233 and the best works that"}, {"start": 262.24, "end": 268.2, "text": " produced synthetic images just a few years ago had a score of around 50."}, {"start": 
268.2, "end": 273.12, "text": " To the best of my knowledge, as of the publishing of this video, the highest Inception Score"}, {"start": 273.12, "end": 279.8, "text": " for an AI is close to 166, so we've come a long, long way."}, {"start": 279.8, "end": 282.32, "text": " You can see some of these images here."}, {"start": 282.32, "end": 283.64, "text": " Truly exciting."}, {"start": 283.64, "end": 285.96, "text": " What a time to be alive."}, {"start": 285.96, "end": 291.2, "text": " The disadvantages of the method is that one, because the diversity of the outputs is also"}, {"start": 291.2, "end": 295.6, "text": " to be measured, it requires many thousands of images."}, {"start": 295.6, "end": 300.4, "text": " This is likely more of an issue with the problem definition itself and not this method and"}, {"start": 300.4, "end": 305.64, "text": " also since this means that the computers and not real people have to do the work, we"}, {"start": 305.64, "end": 307.32, "text": " can give this one a pass."}, {"start": 307.32, "end": 311.84, "text": " This advantage number two, I will include this paper in the video description for you,"}, {"start": 311.84, "end": 316.67999999999995, "text": " which basically describes that there are cases where it is possible to get the network"}, {"start": 316.67999999999995, "end": 323.03999999999996, "text": " to think an image is of higher quality than another one, even if it clearly isn't."}, {"start": 323.03999999999996, "end": 327.64, "text": " Now you see that we have pretty ingenious techniques to measure the quality of image"}, {"start": 327.64, "end": 333.64, "text": " generator AI programs, and of course this area of research is also subject to improvement"}, {"start": 333.64, "end": 336.28, "text": " and I'll be here to tell you about it."}, {"start": 336.28, "end": 365.84, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=t_7qpPOmsME
This AI Produces Binaural (2.5D) Audio
The paper "2.5D Visual Sound" is available here: https://arxiv.org/abs/1812.04204 Pick up cool perks on our Patreon page: › https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Jason Rollins, Javier Bustamante, John De Witt, Kaiesh Vohra, Kjartan Olason, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Binaural, or 2.5D audio, means a sound recording that provides the listener with an amazing 3D-ish sound sensation. It produces a sound that feels highly realistic when listened to through headphones, and therefore, using a pair is highly recommended for this episode. It sounds way more immersive than regular mono, or even stereo audio signals, but it also requires more expertise to produce, and is therefore quite scarce on the internet. Let's listen to the difference together. We have not only heard sound samples here, but you could also see the accompanying video content, which reveals the position of the players and the composition of the scene in which the recording was made. This sounds like a perfect fit for an AI to take a piece of mono audio and use this additional information to convert it to make it sound binaural. This project is exactly about that, where a deep convolutional neural network is used to look at both the video and the single-channel audio content in our footage, and then predict what it would have sounded like were it recorded as a binaural signal. The fact that we can use the visual content as well as the audio with this neural network also enables us to separate the sound of an instrument within the mix. Let's listen. To validate the results, the authors used a quantitative mathematical way of comparing their results to the ground truth, and not only that, but they also carried out two user studies as well. In the first one, the ground truth was shown to the users and they were asked to judge which of the two techniques was better. In this study, this new method performed better than previous methods, and in the second setup, users were asked to name the directions they hear the different instrument sounds coming from. In this case, the new method outperformed the previous techniques by a significant margin, and if we keep progressing like this, we may be at most a couple of papers away from 2.5D audio synthesis that sounds indistinguishable from the real deal. Looking forward to a future where we can enjoy all kinds of video content with this kind of immersion. Thanks for watching and for your generous support and I'll see you next time.
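To make the mono-to-binaural idea a bit more concrete, here is a toy sketch in which a network looks at video features together with the mono mixture and predicts the left-right difference signal, from which the two channels are reconstructed. This is an illustrative stand-in, not the authors' architecture: the layer sizes, chunk length, and the pooled visual feature are all made-up assumptions.

```python
# Toy sketch of predicting a binaural signal from mono audio + visual context.
# Real systems work on spectrograms with convolutional encoders; this uses tiny
# dense layers and random tensors purely to illustrate the data flow.
import torch
import torch.nn as nn

class MonoToBinaural(nn.Module):
    def __init__(self, visual_dim=512, hidden=256, samples=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(samples + visual_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, samples),              # predicted left-right difference signal
        )

    def forward(self, mono, visual_feat):
        diff = self.net(torch.cat([mono, visual_feat], dim=-1))
        left = (mono + diff) / 2                     # reconstruct the two binaural channels
        right = (mono - diff) / 2
        return left, right

model = MonoToBinaural()
mono = torch.randn(4, 1024)                          # batch of mono waveform chunks
visual = torch.randn(4, 512)                         # pooled video features per chunk
left, right = model(mono, visual)
print(left.shape, right.shape)                       # torch.Size([4, 1024]) twice
```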
[{"start": 0.0, "end": 4.2, "text": " Dear Fellow Scholars, this is two-minute papers with Carlos Joel Naifahir."}, {"start": 4.2, "end": 9.88, "text": " Bainaro, or two and a half-day audio, means a sound recording that provides the listener"}, {"start": 9.88, "end": 13.280000000000001, "text": " with an amazing 3D-ish sound sensation."}, {"start": 13.280000000000001, "end": 18.96, "text": " It produces a sound that feels highly realistic when listened to through headphones, and therefore,"}, {"start": 18.96, "end": 22.12, "text": " using a pair is highly recommended for this episode."}, {"start": 22.12, "end": 27.84, "text": " It sounds way more immersive than regular mono, or even stereo audio signals, but also"}, {"start": 27.84, "end": 33.28, "text": " requires more expertise to produce, and is therefore quite scarce on the internet."}, {"start": 33.28, "end": 53.8, "text": " Let's listen to the difference together."}, {"start": 53.8, "end": 58.4, "text": " We have not only heard sound samples here, but you could also see the accompanying video"}, {"start": 58.4, "end": 63.36, "text": " content, which reveals the position of the players and the composition of the scene in"}, {"start": 63.36, "end": 65.6, "text": " which the recording is made."}, {"start": 65.6, "end": 71.36, "text": " This sounds like a perfect fit for an AI to take a piece of mono audio and use this"}, {"start": 71.36, "end": 75.64, "text": " additional information to convert it to make it sound bainaro."}, {"start": 75.64, "end": 80.47999999999999, "text": " This project is exactly about that, where a deep convolution on your own network is used"}, {"start": 80.48, "end": 87.04, "text": " to look at both the video and the single channel audio content in our footage, and then predict"}, {"start": 87.04, "end": 117.0, "text": " what it would have sounded like where it recorded as a bainaro signal."}, {"start": 148.04, "end": 153.6, "text": " The fact that we can use the visual content as well as the audio with this neural network"}, {"start": 153.6, "end": 158.39999999999998, "text": " also enables us to separate the sound of an instrument within the mix."}, {"start": 158.4, "end": 179.44, "text": " Let's listen."}, {"start": 179.44, "end": 185.04000000000002, "text": " To validate the results, the authors both used a quantitative mathematical way of comparing"}, {"start": 185.04, "end": 190.32, "text": " their results to the ground truth and not only that, but they also carried out two user"}, {"start": 190.32, "end": 192.07999999999998, "text": " studies as well."}, {"start": 192.07999999999998, "end": 197.64, "text": " In the first one, the ground truth was shown to the users and they were asked to judge which"}, {"start": 197.64, "end": 199.88, "text": " of the two techniques were better."}, {"start": 199.88, "end": 206.12, "text": " In this study, this new method performed better than previous methods and in the second setup,"}, {"start": 206.12, "end": 210.68, "text": " users were asked to name the directions they hear the different instrument sounds coming"}, {"start": 210.68, "end": 211.68, "text": " from."}, {"start": 211.68, "end": 217.04000000000002, "text": " In this case, the new method outperformed the previous techniques by a significant margin,"}, {"start": 217.04000000000002, "end": 222.12, "text": " and if we keep progressing like this, we may be at most a couple papers away from two"}, {"start": 222.12, "end": 227.4, "text": " and a half the audio synthesis that sounds indistinguishable from the 
real deal."}, {"start": 227.4, "end": 232.04000000000002, "text": " Looking forward to a future where we can enjoy all kinds of video content with this kind"}, {"start": 232.04000000000002, "end": 233.04000000000002, "text": " of immersion."}, {"start": 233.04, "end": 242.04, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=KhP7lTLTipc
6 Life Lessons I Learned From AI Research
Tensorflow experiment link: https://www.reddit.com/r/MachineLearning/comments/4eila2/tensorflow_playground/d20noqu/ Karpathy’s classifier neural network: https://cs.stanford.edu/people/karpathy/convnetjs/demo/cifar10.html Pick up cool perks on our Patreon page: › https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Jason Rollins, Javier Bustamante, John De Witt, Kaiesh Vohra, Kjartan Olason, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/photo-1807526/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Random walk image source: https://en.wikipedia.org/wiki/Random_walk Tensorflow experiment link: https://www.reddit.com/r/MachineLearning/comments/4eila2/tensorflow_playground/d20noqu/ Karpathy’s classifier neural network: https://cs.stanford.edu/people/karpathy/convnetjs/demo/cifar10.html Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is going to be a weird, non-traditional episode, not the usual Two Minute Papers. Hope you'll enjoy it, and if you've finished the video, please let me know in the comments what you think about it. So let's start. Life lessons I learned from AI research. Number one, you need an objective. Before we start training a neural network to perform, for instance, image classification, we need a bunch of training data. This, we can feed to the neural network, telling it that this image depicts a cat, and this one is not a cat, but an ostrich. We also need to specify a loss function. This is super important, because this loss function is used to make sure that the neural network trains itself in a way that its predictions will be similar to the training data it is being fed. It is also referred to as an objective, or objective function, to indicate that we know precisely what we are looking for, and that's what the neural network should do. This is a way to measure how the neural network is progressing, and without this, it is useless. Similarly, in another learning problem, we can specify an objective for this agent, which in this case is to be able to traverse as far from a starting point as possible, and it reconfigures its body type and movement to be able to score high on our objective. And this leads us to the second lesson. A change in the objective changes the strategy required to achieve it. Look here: in a different problem definition, we can specify a different objective, for instance, a different terrain, and you see that if we wish to succeed here, we need a vastly different body type. Form follows function. And in this other case, the objective is to be able to traverse efficiently, but with minimal material use for the legs. The solution, again, changes according to the objective. New objectives require new strategies. Number three, if the objective was wrong, do not worry and aim again. Have a look at AlphaGo. This is DeepMind's algorithm that was able to defeat some of the best players in the world in the game of Go. This was a highly non-trivial achievement, as the space of possible moves is so stupendously large that it is impossible to evaluate every move. Instead, it tries to aggressively focus on a smaller number of possible moves and tries to simulate the result of these moves. If the move leads to an improvement in our position, it is a good one. If not, it should be avoided. Sounds simple, right? Well, we have an objective, that's great, but initially it has a really bad predictor, which means that it is really bad at judging which move is good and which one isn't. However, over time, it refines its predictor, and these estimations improve further and further. In the end, by only taking a brief look at the state of the game, it can predict with high confidence whether it is going to win or not. Initially, we have an objective, but how do we know whether it is a good one? Well, we try to get there and then evaluate our position. We may find that we got nowhere with this. What most people do is abort the program. Quit the game. Give up. It's over. No, don't despair. It's not over. This is the early stage of teaching an AI, and this is the time when we can improve our predictor and pick our next objective more wisely. Over time, you'll find the ideas that don't work, and not only that, you'll find out why they don't work. Do not worry, and aim again. And this leads us to lesson number four.
Zoom out and evaluate. This is exactly what DeepMind's amazing deep Q-learning algorithm does, which took the world by storm as it was able to play Atari Breakout on a superhuman level, just by looking at the pixels of the game. This algorithm ran in two phases, where phase one was collecting experiences, and phase two was called experience replay. This is where the AI stops and reflects upon these experiences. Zooming out and evaluating is immensely important, because, after all, this is where the real learning happens. So, every now and then, zoom out and evaluate. And while we are here, I can simply not resist adding two more lessons I learned from other scientific disciplines. So, lesson number five: if you find something that works, hold on to it. This is exactly what Metropolis Light Transport does, which is a light simulation algorithm that is able to create beautiful images even for extremely difficult problems, where it is challenging to find where the light is. However, when it finally finds something, it makes sure not to forget about it, and explores nearby light paths. It works like a charm for difficult light transport situations, and can create absolutely beautiful images for even the hardest virtual scenes. Seek the light and hold on to it. And whenever you feel that you are still not making progress, think about the following. Lesson number six: as long as you keep moving, you'll keep progressing. Take a look at this random walk. A random walk is a succession of steps in completely random directions. This walk completely lacks direction, just like a drunkard trying to find his way home. However, get this. A mathematical theorem says that after n steps, the expected distance from where we started is proportional to the square root of n. This is huge. What this means is that, for instance, if we took four completely random steps, we are expected to be two units of distance away from where we started. That's progress. If we take a hundred steps, even then, we can expect to be around 10 units of distance away from the starting point. This concept works even if our predictor is completely haywire, and we choose our objectives like a drunkard. Now I think that's a lesson worth sharing. To recap, you need an objective. It can be anything; as long as you keep moving, you'll progress. If you have achieved it and it ended up not being what you were looking for, don't stop. Zoom out and reflect. This will help you to improve your predictor, and you will be able to recalibrate and aim again at something more meaningful. Now, aim and find a new objective. When you have a new objective, your strategy needs to change to be able to achieve it. Finally, if you find something desirable, hold on to it and explore more in this direction. Seek the light. Of course, you don't have to live your life this way, but I think these are interesting, mathematically motivated lessons that are worth showing to you. After all, this series is not only meant to inform, but to inspire you to get out there and create. It always feels absolutely amazing getting these kind messages from you fellow scholars. Some of you said that the series has changed your life in a positive way. I am really out of words, and I'm honored to be able to make these videos for you fellow scholars. Let me know in the comments whether you enjoyed this episode, and please keep the kind messages coming; they really make my day. Thanks for watching, and for your generous support, I'll see you next time.
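The square-root claim from this episode is easy to check numerically. A small caveat worth noting: it is the root-mean-square distance after n unit steps that equals sqrt(n) exactly; the average distance is slightly smaller, but grows at the same square-root rate. The sketch below is a quick NumPy simulation, not code from the video.

```python
# Numerical check of the random-walk claim: after n random unit steps, the
# root-mean-square distance from the start grows like sqrt(n). 2D walk.
import numpy as np

rng = np.random.default_rng(42)

def rms_distance(n_steps, n_walkers=20000):
    angles = rng.uniform(0.0, 2.0 * np.pi, size=(n_walkers, n_steps))
    steps = np.stack([np.cos(angles), np.sin(angles)], axis=-1)   # unit-length steps
    endpoints = steps.sum(axis=1)                                 # final positions
    return float(np.sqrt((endpoints ** 2).sum(axis=1).mean()))    # RMS distance

for n in (4, 100, 400):
    print(n, round(rms_distance(n), 2), "~ sqrt(n) =", round(float(np.sqrt(n)), 2))
# Roughly: 4 -> ~2, 100 -> ~10, 400 -> ~20, matching the square-root law.
```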
[{"start": 0.0, "end": 4.08, "text": " Dear Fellow Scholars, this is Two Minute Papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.08, "end": 9.52, "text": " This is going to be a weird, non-traditional episode, not the usual Two Minute Papers."}, {"start": 9.52, "end": 14.88, "text": " Hope you'll enjoy it and if you've finished the video, please let me know in the comments what you think about it."}, {"start": 14.88, "end": 18.88, "text": " So let's start. Life lessons I learned from AI Research."}, {"start": 18.88, "end": 21.84, "text": " Number one, you need an objective."}, {"start": 21.84, "end": 27.52, "text": " Before we start training a neural network to perform, for instance, image classification,"}, {"start": 27.52, "end": 32.4, "text": " we need a bunch of training data. This, we can feed to this neural network,"}, {"start": 32.4, "end": 37.36, "text": " telling that this image depicts a cat, and this one is not a cat, but an ostrich."}, {"start": 37.36, "end": 40.24, "text": " We also need to specify a loss function."}, {"start": 40.24, "end": 46.8, "text": " This is super important because this loss function is used to make sure that the neural network trains"}, {"start": 46.8, "end": 52.32, "text": " itself in a way that its predictions will be similar to the training data it is being fed."}, {"start": 52.32, "end": 59.2, "text": " It is also referred to as an objective, or objective function, to indicate that we know precisely"}, {"start": 59.2, "end": 62.72, "text": " what we are looking for and that's what the neural network should do."}, {"start": 62.72, "end": 68.16, "text": " This is a way to measure how the neural network is progressing and without this, it is useless."}, {"start": 68.88, "end": 74.0, "text": " Similarly, in another learning problem, we can specify an objective for this end,"}, {"start": 74.0, "end": 79.76, "text": " which in this case is to be able to traverse as far from a starting point as possible,"}, {"start": 79.76, "end": 85.84, "text": " and it reconfigures its body type and movement to be able to score high on our objective."}, {"start": 86.24000000000001, "end": 88.88000000000001, "text": " And this leads us to the second lesson."}, {"start": 88.88000000000001, "end": 93.52000000000001, "text": " A change in the objective changes the strategy required to achieve it."}, {"start": 94.08000000000001, "end": 99.36000000000001, "text": " Look here, in a different problem definition, we can specify a different objective."}, {"start": 99.36000000000001, "end": 104.0, "text": " For instance, a different terrain, and you see that if we wish to succeed here,"}, {"start": 104.0, "end": 106.24000000000001, "text": " we need a vastly different body type."}, {"start": 106.88000000000001, "end": 108.32000000000001, "text": " Form follows function."}, {"start": 108.32, "end": 113.75999999999999, "text": " And in this other case, the objective is to be able to traverse efficiently,"}, {"start": 113.75999999999999, "end": 117.19999999999999, "text": " but with minimal material use for the legs."}, {"start": 117.19999999999999, "end": 121.03999999999999, "text": " The solution, again, changes accordingly to the objective."}, {"start": 121.03999999999999, "end": 124.32, "text": " New objectives require new strategies."}, {"start": 124.32, "end": 129.6, "text": " Number three, if the objective was wrong, do not worry and aim again."}, {"start": 129.6, "end": 131.35999999999999, "text": " Have a look at AlphaGo."}, {"start": 131.35999999999999, "end": 
137.84, "text": " This is DeepMind's algorithm that was able to defeat some of the best players in the world in the game of Go."}, {"start": 137.84, "end": 143.36, "text": " This was a highly non-trivial achievement as the space of possible moves is so"}, {"start": 143.36, "end": 147.36, "text": " stupendously large that it is impossible to evaluate every move."}, {"start": 147.36, "end": 152.72, "text": " Instead, it tries to aggressively focus on a smaller number of possible moves"}, {"start": 152.72, "end": 155.84, "text": " and tries to simulate the result of these moves."}, {"start": 155.84, "end": 159.92000000000002, "text": " If the move leads to an improvement in our position, it is a good one."}, {"start": 159.92000000000002, "end": 161.76, "text": " If not, it should be avoided."}, {"start": 161.76, "end": 163.76, "text": " Sounds simple, right?"}, {"start": 163.76, "end": 170.64, "text": " Well, we have an objective, that's great, but initially it has a really bad predictor,"}, {"start": 170.64, "end": 175.6, "text": " which means that it is really bad at judging, which move is good, and which one isn't."}, {"start": 175.6, "end": 182.0, "text": " However, over time, it refines its predictor, and these estimations improve further and further."}, {"start": 182.0, "end": 186.32, "text": " In the end, by only taking a brief look at the state of the game,"}, {"start": 186.32, "end": 190.95999999999998, "text": " it can predict with a high confidence whether it is going to win or not."}, {"start": 190.96, "end": 196.72, "text": " Initially, we have an objective, but how do we know whether it is a good one?"}, {"start": 196.72, "end": 201.20000000000002, "text": " Well, we try to get there and then evaluate our position."}, {"start": 201.20000000000002, "end": 203.92000000000002, "text": " We may find that we got nowhere with this."}, {"start": 203.92000000000002, "end": 206.8, "text": " What most people do is abort the program."}, {"start": 206.8, "end": 208.08, "text": " Quit the game."}, {"start": 208.08, "end": 209.12, "text": " Give up."}, {"start": 209.12, "end": 210.08, "text": " It's over."}, {"start": 210.08, "end": 212.24, "text": " No, don't despair."}, {"start": 212.24, "end": 213.44, "text": " It's not over."}, {"start": 213.44, "end": 219.44, "text": " This is the early stage of teaching an AI, and this is the time where we can improve our predictor,"}, {"start": 219.44, "end": 222.88, "text": " and pick our next objective more wisely."}, {"start": 222.88, "end": 229.52, "text": " Over time, you'll find the ideas that don't work, and not only that, you'll find out why they don't work."}, {"start": 229.52, "end": 231.68, "text": " Do not worry, and aim again."}, {"start": 231.68, "end": 234.16, "text": " And this leads us to lesson number four."}, {"start": 234.16, "end": 236.16, "text": " Zoom out and evaluate."}, {"start": 236.16, "end": 240.56, "text": " This is exactly what DeepMind's amazing deep-cool learning algorithm does,"}, {"start": 240.56, "end": 246.48, "text": " that took the world by storm as it was able to play Atari Breakout on a superhuman level,"}, {"start": 246.48, "end": 249.44, "text": " just by looking at the pixels of the game."}, {"start": 249.44, "end": 255.04, "text": " This algorithm ran in two phases, where phase one is collecting experiences,"}, {"start": 255.04, "end": 258.4, "text": " and phase two was called Experience Replay."}, {"start": 258.4, "end": 263.2, "text": " This is where the AI stops and reflects upon these experiences."}, 
{"start": 263.2, "end": 267.68, "text": " Zooming out and evaluating is immensely important, because after all,"}, {"start": 267.68, "end": 270.24, "text": " this is where the real learning happens."}, {"start": 270.24, "end": 274.08, "text": " So, every now and then, zoom out and evaluate."}, {"start": 274.08, "end": 280.96, "text": " And while we are here, I can simply not resist adding two more lessons I learned from other scientific disciplines."}, {"start": 280.96, "end": 286.32, "text": " So, lesson number five, if you find something that works, hold on to it."}, {"start": 286.32, "end": 289.52, "text": " This is exactly what Metropolis Light Transport does,"}, {"start": 289.52, "end": 294.32, "text": " which is a light simulation algorithm that is able to create beautiful images,"}, {"start": 294.32, "end": 299.68, "text": " even for extremely difficult problems, where it is challenging to find where the light is."}, {"start": 299.68, "end": 304.8, "text": " However, when it finally finds something, it makes sure not to forget about it,"}, {"start": 304.8, "end": 307.36, "text": " and explore nearby light paths."}, {"start": 307.36, "end": 310.8, "text": " It works like a charm for difficult light transport situations,"}, {"start": 310.8, "end": 316.48, "text": " and can create absolutely beautiful images for even the hardest virtual scenes."}, {"start": 316.48, "end": 318.96000000000004, "text": " Seek the light and hold on to it."}, {"start": 318.96000000000004, "end": 323.68, "text": " And whenever you feel that you are still not making progress, think about the following."}, {"start": 323.68, "end": 328.4, "text": " Lesson number six, as long as you keep moving, you'll keep progressing."}, {"start": 328.4, "end": 330.23999999999995, "text": " Take a look at this random walk."}, {"start": 330.23999999999995, "end": 335.03999999999996, "text": " A random walk is a succession of steps in completely random directions."}, {"start": 335.03999999999996, "end": 340.32, "text": " This walk is completely lack of direction, just as a drunkard that tries to find home."}, {"start": 340.32, "end": 342.15999999999997, "text": " However, get this."}, {"start": 342.15999999999997, "end": 348.23999999999995, "text": " A mathematical theorem says that after n steps, the expected distance from where we started"}, {"start": 348.23999999999995, "end": 351.67999999999995, "text": " is proportional to the square root of n."}, {"start": 351.67999999999995, "end": 353.12, "text": " This is huge."}, {"start": 353.12, "end": 357.84, "text": " What this means is that for instance, if we took four completely random steps,"}, {"start": 357.84, "end": 362.4, "text": " we are expected to be two units of distance away from where we started."}, {"start": 362.4, "end": 363.84, "text": " That's progress."}, {"start": 363.84, "end": 371.03999999999996, "text": " If we take a hundred steps, even then, we can expect to be around 10 units of distance away from the starting point."}, {"start": 371.03999999999996, "end": 375.12, "text": " This concept works even if our predictor is completely haywire,"}, {"start": 375.12, "end": 377.84, "text": " and we choose our objectives like a drunkard."}, {"start": 377.84, "end": 380.32, "text": " Now I think that's a lesson worth sharing."}, {"start": 380.32, "end": 382.71999999999997, "text": " To recap, you need an objective."}, {"start": 382.71999999999997, "end": 386.96, "text": " It can be anything, so long as you keep moving, you'll progress."}, {"start": 386.96, "end": 
390.71999999999997, "text": " If you have achieved it and it ended up not being what you were looking for,"}, {"start": 390.71999999999997, "end": 391.76, "text": " don't stop."}, {"start": 391.76, "end": 393.76, "text": " Zoom out and reflect."}, {"start": 393.76, "end": 396.0, "text": " This will help you to improve your predictor,"}, {"start": 396.0, "end": 401.59999999999997, "text": " and you will be able to recalibrate and aim again at something more meaningful."}, {"start": 401.59999999999997, "end": 404.47999999999996, "text": " Now, aim and find a new objective."}, {"start": 404.47999999999996, "end": 409.59999999999997, "text": " When you have a new objective, your strategy needs to change to be able to achieve it."}, {"start": 409.59999999999997, "end": 412.4, "text": " Finally, if you find something desirable,"}, {"start": 412.4, "end": 415.76, "text": " hold on to it and explore more in this direction."}, {"start": 415.76, "end": 417.12, "text": " Seek the light."}, {"start": 417.12, "end": 419.52, "text": " Of course, you don't have to live your life this way,"}, {"start": 419.52, "end": 421.28, "text": " but I think these are interesting,"}, {"start": 421.28, "end": 424.96, "text": " mathematically motivated lessons that are worth showing to you."}, {"start": 424.96, "end": 428.08, "text": " After all, this series is not only meant to inform,"}, {"start": 428.08, "end": 431.12, "text": " but to inspire you to get out there and create."}, {"start": 431.12, "end": 433.36, "text": " It always feels absolutely amazing,"}, {"start": 433.36, "end": 436.08, "text": " getting these kind messages from you fellow scholars."}, {"start": 436.08, "end": 440.24, "text": " Some of you said that the series has changed your life in a positive way."}, {"start": 440.24, "end": 445.68, "text": " I am really out of words, and I'm honored to be able to make these videos for you fellow scholars."}, {"start": 445.68, "end": 448.48, "text": " Let me know in the comments whether you enjoyed this episode,"}, {"start": 448.48, "end": 452.16, "text": " and please keep the kind messages coming they really make my day."}, {"start": 452.16, "end": 454.40000000000003, "text": " Thanks for watching, and for your generous support,"}, {"start": 454.4, "end": 484.32, "text": " I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=T8YOzqy7t5Y
This AI Learns From Humans…and Exceeds Them
The paper "Reward learning from human preferences and demonstrations in Atari" is available here: https://arxiv.org/abs/1811.06521 Pick up cool perks on our Patreon page: › https://www.patreon.com/TwoMinutePapers Crypto and PayPal links are available below. Thank you very much for your generous support! › PayPal: https://www.paypal.me/TwoMinutePapers › Bitcoin: 1a5ttKiVQiDcr9j8JT2DoHGzLG7XTJccX › Ethereum: 0xbBD767C0e14be1886c6610bf3F592A91D866d380 › LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Jason Rollins, Javier Bustamante, John De Witt, Kaiesh Vohra, Kjartan Olason, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/photo-3846345/ + https://upload.wikimedia.org/wikipedia/en/7/70/HERO_A800_ingame.png Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is a collaboration between DeepMind and OpenAI on using human demonstrations to teach an AI to play games really well. The basis of this work is reinforcement learning, which is about choosing a set of actions in an environment to maximize a score. For some games, this score is typically provided by the game itself, but in more complex games, for instance, ones that require exploration, this score is not too useful to train an AI. In this project, the key idea is to use human demonstrations to teach an AI how to succeed. This means that we can sit down, play the game, show the footage to the AI, and hope that it learns something useful from it. Now, the most trivial implementation of this would be to imitate the footage too closely, or, in other words, simply redo what the human has done. That would be a trivial endeavor, and it is the most common way of misunderstanding what is happening here, so I will emphasize that this is not the case. Just imitating what the human player does would not be very useful, because, one, it puts too much burden on the humans, and that's not what we want, and, number two, the AI could not be significantly better than the human demonstrator, and that's also not what we want. In fact, if we have a look at the paper, the first figure shows us right away how badly a simpler imitation program performs. That's not what this algorithm is doing. What it does instead is that it looks at the footage as the human plays the game and tries to guess what they were trying to accomplish. Then, we can tell a reinforcement learner that this is now our reward function, and it should train to become better at that. As you see here, it can play an exploration-heavy game, such as Atari Hero, and in the footage above, you see the rewards over time, the higher the better. This AI performs really well in this game, and significantly outperforms reinforcement learning agents trained from scratch on Montezuma's Revenge as well, although it can still get stuck on a ladder. We discussed earlier a curious AI that was quickly getting bored by ladders and moved on to more exciting endeavors in the game. The performance of the new agent seems roughly equivalent to an agent trained from scratch in the game Pong, presumably because of the lack of exploration and the fact that it is very easy to understand how to score points in this game. But wait, in the previous episode we just talked about an algorithm where we didn't even need to play; we could just sit in our favorite armchair and direct the algorithm. So, why play? Well, just providing feedback is clearly very convenient, but as we can only specify what we liked and what we didn't like, it is not very efficient. With the human demonstrations here, we can immediately show the AI what we are looking for, and as it is able to learn the principles and then improve further, and eventually become better than the human demonstrator, this work provides a highly desirable alternative to already existing techniques. Loving it. If you have a look at the paper, you will also see how the authors incorporated a cool additional step into the pipeline where we can add annotations to the training footage, so make sure to have a look. Also, if you feel that a bunch of these AI videos a month are worth a dollar, please consider supporting us at patreon.com slash 2 Minute Papers.
You can also pick up cool perks like getting early access to all of these episodes, or getting your name immortalized in the video description. We also support cryptocurrencies and one-time payments; the links and additional information for all of these are available in the video description. With your support, we can make better videos for you. Thanks for watching and for your generous support, and I'll see you next time.
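To make the core mechanism a little more concrete, here is a minimal, self-contained Python sketch of one common way such a reward model can be fit: the model is shown pairs of gameplay clips, told which one the human liked better, and trained with a logistic, Bradley-Terry style preference loss. Everything here, from the linear reward model to the toy "annotator", is an illustrative assumption rather than the authors' implementation; the learned reward would then be handed to a reinforcement learner as its score.

```python
# Toy sketch: learn a reward model from pairwise preferences over gameplay clips.
import numpy as np

rng = np.random.default_rng(0)
OBS_DIM = 8                                   # assumed observation size
W = rng.normal(scale=0.1, size=OBS_DIM)       # linear reward model: r(s) = W . s

def clip_return(clip, w):
    """Predicted return of a clip = sum of per-frame predicted rewards."""
    return sum(float(w @ s) for s in clip)

def preference_grad(clip_a, clip_b, pref_a, w):
    """Gradient of the Bradley-Terry loss, with P(A preferred) = sigmoid(R(A) - R(B))."""
    diff = clip_return(clip_a, w) - clip_return(clip_b, w)
    p_a = 1.0 / (1.0 + np.exp(-diff))
    return (p_a - pref_a) * (sum(clip_a) - sum(clip_b))   # d(-log likelihood)/dW

def make_clip():
    """A toy 10-frame gameplay clip of random observations."""
    return [rng.normal(size=OBS_DIM) for _ in range(10)]

# Toy "human": prefers the clip whose first feature is larger overall.
for step in range(2000):
    a, b = make_clip(), make_clip()
    pref_a = 1.0 if sum(s[0] for s in a) > sum(s[0] for s in b) else 0.0
    W -= 0.01 * preference_grad(a, b, pref_a, W)

# W has now absorbed what the annotator cared about; an RL agent would be trained
# to maximize this predicted reward instead of (or on top of) the game score.
print("learned weight on the feature the human cared about:", round(W[0], 3))
```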
[{"start": 0.0, "end": 4.12, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zornaifahir."}, {"start": 4.12, "end": 13.16, "text": " This is a collaboration between deep-mind and open AI on using human demonstrations to teach an AI to play games really well."}, {"start": 13.16, "end": 21.0, "text": " The basis of this work is reinforcement learning, which is about choosing a set of actions in an environment to maximize a score."}, {"start": 21.0, "end": 25.400000000000002, "text": " For some games, this score is typically provided by the game itself,"}, {"start": 25.4, "end": 33.599999999999994, "text": " but in more complex games, for instance, once that require exploration, this score is not too useful to train an AI."}, {"start": 33.599999999999994, "end": 40.4, "text": " In this project, the key idea is to use human demonstrations to teach an AI how to succeed."}, {"start": 40.4, "end": 48.8, "text": " This means that we can sit down, play the game, show the footage to the AI, and hope that it learns something useful from it."}, {"start": 48.8, "end": 58.599999999999994, "text": " Now, the most trivial implementation of this would be to imitate the footage too closely, or, in other words, simply redo what the human has done."}, {"start": 58.599999999999994, "end": 68.2, "text": " That would be a trivial endeavor, and it is the most common way of misunderstanding what is happening here, so I will emphasize that this is not the case."}, {"start": 68.2, "end": 76.2, "text": " Just imitating what the human player does would not be very useful, because one, it puts too much burden on the humans,"}, {"start": 76.2, "end": 84.60000000000001, "text": " that's not what we want, and number two, the AI could not be significantly better than the human demonstrator, and that's also not what we want."}, {"start": 84.60000000000001, "end": 93.4, "text": " In fact, if we have a look at the paper, the first figure shows us right away how badly a simpler imitation program performs."}, {"start": 93.4, "end": 103.80000000000001, "text": " That's not what this algorithm is doing. 
What it does instead is that it looks at the footage as the human plays the game and tries to guess what they were trying to accomplish."}, {"start": 103.8, "end": 111.39999999999999, "text": " Then, we can tell a reinforcement learner that this is now our reward function, and it should train to become better at that."}, {"start": 111.39999999999999, "end": 121.4, "text": " As you see here, it can play an exploration heavy game, such as Atari Hero, and in the footage above, you see the rewards over time, the higher the better."}, {"start": 121.4, "end": 134.20000000000002, "text": " This AI performs really well in this game, and significantly outperforms reinforcement learner agents trained from scratch on Montezuma's revenge as well, although it can still get stuck on a ladder."}, {"start": 142.6, "end": 151.20000000000002, "text": " We discussed earlier a curious AI that was quickly getting bored by ladders and moved on to more exciting endeavors in the game."}, {"start": 151.2, "end": 166.0, "text": " The performance of the new agent seems roughly equivalent to an agent trained from scratch in the game Pong, presumably because of the lack of exploration and the fact that it is very easy to understand how to score points in this game."}, {"start": 166.0, "end": 177.2, "text": " But wait, in the previous episode we just talked about an algorithm where we didn't even need to play, we could just sit in our favorite armchair and direct the algorithm."}, {"start": 177.2, "end": 189.2, "text": " So, why play? Well, just providing feedback is clearly very convenient, but as we can only specify what we liked and what we didn't like, it is not very efficient."}, {"start": 189.2, "end": 203.2, "text": " With the human demonstrations here, we can immediately show the AI what we are looking for, and as it is able to learn the principles and then improve further, and eventually become better than the human demonstrator,"}, {"start": 203.2, "end": 209.2, "text": " this work provides a highly desirable alternative to already existing techniques, loving it."}, {"start": 209.2, "end": 221.2, "text": " If you have a look at the paper, you will also see how the authors incorporated a cool additional step to the pipeline where we can add annotations to the training footage, so make sure to have a look."}, {"start": 221.2, "end": 230.2, "text": " Also, if you feel that a bunch of these AI videos a month are worth a dollar, please consider supporting us at patreon.com slash 2 Minute Papers."}, {"start": 230.2, "end": 239.2, "text": " You can also pick up cool perks like getting early access to all of these episodes, or getting your name immortalized in the video description."}, {"start": 239.2, "end": 247.2, "text": " We also support crypto currencies and one-time payments, the links and additional information to all of these are available in the video description."}, {"start": 247.2, "end": 250.2, "text": " With your support, we can make better videos for you."}, {"start": 250.2, "end": 260.2, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=pc_k-sgUYmY
DeepMind’s Take on How To Create a Benign AI
The paper "Scalable agent alignment via reward modeling: a research direction" is available here: 1. https://arxiv.org/abs/1811.07871 2. https://medium.com/@deepmindsafetyresearch/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84 Pick up cool perks on our Patreon page: › https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Jason Rollins, Javier Bustamante, John De Witt, Kaiesh Vohra, Kjartan Olason, Lorin Atzberger, Marcin Dukaczewski, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Crypto and PayPal links are available below. Thank you very much for your generous support! › PayPal: https://www.paypal.me/TwoMinutePapers › Bitcoin: 1a5ttKiVQiDcr9j8JT2DoHGzLG7XTJccX › Ethereum: 0xbBD767C0e14be1886c6610bf3F592A91D866d380 › LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Thumbnail background image credit: https://pixabay.com/photo-3706562/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This episode does not have the usual visual fireworks, but I really wanted to cover this paper because it tells a story that is, I think, very important for all of us to hear about. When creating a new AI to help us with a task, we have to somehow tell this AI what we consider to be a desirable solution. If everything goes well, it will find out the best way to accomplish it. This is easy when playing simpler video games because we can just tell the algorithm to maximize the score seen in the game. For instance, the more bricks we hit in Atari Breakout, the closer we get to finishing the level. However, in real life, we don't have anyone giving us a score to tell us how close we are to our objective. What's even worse, sometimes we have to make decisions that seem bad at the time, but will serve us well in the future. Trying to save money or studying for a few years longer are typical life decisions that pay off in the long run, but may seem undesirable at the time. The opposite is also true. Ideas that may sound right at the time may immediately backfire. When in a car chase, don't ask the car AI to unload all unnecessary weights to go faster, or if you do, prepare to be promptly ejected from the car. So how can we possibly create an AI that somehow understands our intentions and acts in line with them? That's a challenging question and is often referred to as the agent alignment problem. It has to be aligned with our values. What can we do about this? Well, short of having a mind-reading device, we can maybe control the behavior of the AI through its reward system. This is why DeepMind just published a paper on this topic, where they started their thought process from two assumptions. Assumption number one, quoting the authors: for many tasks we want to solve, evaluation of outcomes is easier than producing the correct behavior. In short, it is easier to yell at the TV than to become an athlete. Sounds reasonable, right? Note that from complexity theory, we know that this does not always hold, but it is indeed true for a large number of difficult problems. Assumption number two: user intentions can be learned with high accuracy. In other words, given enough data that somehow relates to our intentions, the AI should be able to learn that. Leaning on these two assumptions, we can change the basic formulation of reinforcement learning in the following way. Normally, we have an agent that chooses a set of actions in an environment to maximize a score. For instance, this could mean moving the paddle around to hit as many blocks as possible and finish the level. They extended this formulation in a way that the user can periodically provide feedback on how the score should be calculated. Now, the AI will try to maximize this new score, and we hope that this will be more in line with our intentions. Or, in our car chase example, we could modify our reward to make sure we remain in the car and not get ejected. Perhaps the most remarkable property of this formulation is that it doesn't even require us to, for instance, play the game at all to demonstrate our intentions to the algorithm. The formulation follows our principles and not our actions. We can just sit in our favorite armchair, bend the AI to our will by changing the reward function every now and then, and let the AI do the grueling work. This is like yelling at the TV except that it actually works. Loving the idea.
If you have a look at the paper, you will see a ton more details on how to do this efficiently and a case study with a few Atari games. Also, since this has a lot of implications pertaining to AI safety and how to create aligned agents, an increasingly important topic these days, huge respect for DeepMind for investing more and more of their time and money in this area. Thanks for watching and for your generous support, and I'll see you next time.
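As a rough illustration of the loop described above, here is a tiny Python sketch, with an invented toy environment and made-up numbers, of how reward modeling alternates between an agent exploiting its current learned reward and a user who only occasionally corrects it. None of this is DeepMind's code; it only shows the structure of the alternation.

```python
# Toy reward-modeling loop (illustrative assumptions throughout).
# States are the integers 0..9; the user's true, unstated intent is "reach state 7".
TRUE_GOAL = 7

def user_feedback(state):
    """Stand-in for occasional human feedback on how good an outcome really was."""
    return 1.0 if state == TRUE_GOAL else 0.0

reward_model = {s: 0.0 for s in range(10)}    # learned reward, starts uninformative

for episode in range(60):
    # The "agent": exploit the current reward model, plus a little scripted exploration.
    visited = [max(reward_model, key=reward_model.get), episode % 10]
    if episode % 3 == 0:                      # the user only checks in periodically
        for s in visited:
            # nudge the learned reward toward the user's judgement of this outcome
            reward_model[s] += 0.5 * (user_feedback(s) - reward_model[s])

print("state the agent has learned to prefer:", max(reward_model, key=reward_model.get))
```

The agent never sees the true goal directly; it only ever maximizes the learned reward, which the periodic feedback gradually pulls toward the user's intent.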
[{"start": 0.0, "end": 4.32, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.32, "end": 9.84, "text": " This episode does not have the usual visual fireworks, but I really wanted to cover this paper"}, {"start": 9.84, "end": 15.64, "text": " because it tells a story that is, I think, very important for all of us to hear about."}, {"start": 15.64, "end": 21.56, "text": " When creating a new AI to help us with a task, we have to somehow tell this AI what we consider"}, {"start": 21.56, "end": 23.72, "text": " to be a desirable solution."}, {"start": 23.72, "end": 28.32, "text": " If everything goes well, it will find out the best way to accomplish it."}, {"start": 28.32, "end": 33.88, "text": " This is easy when playing simpler video games because we can just tell the algorithm to maximize"}, {"start": 33.88, "end": 36.12, "text": " the score seen in the game."}, {"start": 36.12, "end": 40.88, "text": " For instance, the more breaks we hit in Atari Breakout, the closer we get to finishing the"}, {"start": 40.88, "end": 41.88, "text": " level."}, {"start": 41.88, "end": 47.68, "text": " However, in real life, we don't have anyone giving us a score to tell us how close we are"}, {"start": 47.68, "end": 49.16, "text": " to our objective."}, {"start": 49.16, "end": 54.04, "text": " What's even worse, sometimes we have to make decisions that seem bad at the time, but"}, {"start": 54.04, "end": 56.32, "text": " will serve us well in the future."}, {"start": 56.32, "end": 60.92, "text": " Trying to save money or studying for a few years longer are typical life decisions that"}, {"start": 60.92, "end": 65.72, "text": " pay off in the long run, but may seem undesirable at the time."}, {"start": 65.72, "end": 67.76, "text": " The opposite is also true."}, {"start": 67.76, "end": 71.88, "text": " Ideas that may sound right at the time may immediately backfire."}, {"start": 71.88, "end": 78.48, "text": " When in a car chase, don't ask the car AI to unload all unnecessary weights to go faster,"}, {"start": 78.48, "end": 82.76, "text": " or if you do, prepare to be promptly ejected from the car."}, {"start": 82.76, "end": 89.76, "text": " So how can we possibly create an AI that somehow understands our intentions and acts in line"}, {"start": 89.76, "end": 90.76, "text": " with them?"}, {"start": 90.76, "end": 96.32000000000001, "text": " That's a challenging question and is often referred to as the agent alignment problem."}, {"start": 96.32000000000001, "end": 98.84, "text": " It has to be aligned with our values."}, {"start": 98.84, "end": 100.36000000000001, "text": " What can we do about this?"}, {"start": 100.36000000000001, "end": 106.32000000000001, "text": " Well, short of having a mind reading device, we can maybe control the behavior of the AI"}, {"start": 106.32000000000001, "end": 108.84, "text": " through its reward system."}, {"start": 108.84, "end": 113.08, "text": " This is a deep mind just published a paper on this topic where they started their thought"}, {"start": 113.08, "end": 116.04, "text": " process from two assumptions."}, {"start": 116.04, "end": 118.4, "text": " Assumption number one, quoting the authors."}, {"start": 118.4, "end": 123.84, "text": " For many tasks we want to solve, evaluation of outcomes is easier than producing the"}, {"start": 123.84, "end": 125.44, "text": " correct behavior."}, {"start": 125.44, "end": 130.28, "text": " In short, it is easier to yell at the TV than to become an athlete."}, {"start": 130.28, 
"end": 131.76, "text": " Sounds reasonable, right?"}, {"start": 131.76, "end": 137.52, "text": " Note that from complexity theory, we know that this does not always hold, but it is indeed"}, {"start": 137.52, "end": 141.20000000000002, "text": " true for a large number of difficult problems."}, {"start": 141.20000000000002, "end": 145.84, "text": " Assumption number two, user intentions can be learned with high accuracy."}, {"start": 145.84, "end": 151.16000000000003, "text": " In other words, given enough data that somehow relates to our intentions, the AI should"}, {"start": 151.16000000000003, "end": 153.08, "text": " be able to learn that."}, {"start": 153.08, "end": 158.04000000000002, "text": " Leaning on these two assumptions, we can change the basic formulation of reinforcement"}, {"start": 158.04000000000002, "end": 160.20000000000002, "text": " learning in the following way."}, {"start": 160.20000000000002, "end": 164.96, "text": " Normally, we have an agent that chooses a set of actions in an environment to maximize"}, {"start": 164.96, "end": 166.16000000000003, "text": " a score."}, {"start": 166.16, "end": 171.56, "text": " For instance, this could mean moving the pedal around to hit as many blocks as possible"}, {"start": 171.56, "end": 173.72, "text": " and finish the level."}, {"start": 173.72, "end": 178.8, "text": " They extended this formulation in a way that the user can periodically provide feedback"}, {"start": 178.8, "end": 181.16, "text": " on how the score should be calculated."}, {"start": 181.16, "end": 186.72, "text": " Now, the AI will try to maximize this new score, and we hope that this will be more in"}, {"start": 186.72, "end": 188.72, "text": " line with our intentions."}, {"start": 188.72, "end": 194.16, "text": " Or, in our car chase example, we could modify our reward to make sure we remain in the"}, {"start": 194.16, "end": 196.68, "text": " car and not get ejected."}, {"start": 196.68, "end": 201.44, "text": " Perhaps the most remarkable property of this formulation is that it doesn't even require"}, {"start": 201.44, "end": 207.28, "text": " us to, for instance, play the game at all to demonstrate our intentions to the algorithm."}, {"start": 207.28, "end": 212.0, "text": " The formulation follows our principles and not our actions."}, {"start": 212.0, "end": 217.24, "text": " We can just sit in our favorite armchair, bend the AI to our will by changing the reward"}, {"start": 217.24, "end": 222.24, "text": " function every now and then, and let the AI do the grueling work."}, {"start": 222.24, "end": 226.56, "text": " This is like yelling at the TV except that it actually works."}, {"start": 226.56, "end": 227.84, "text": " Loving the idea."}, {"start": 227.84, "end": 233.0, "text": " If you have a look at the paper, you will see a ton more details on how to do this efficiently"}, {"start": 233.0, "end": 236.0, "text": " and a case study with a few Atari games."}, {"start": 236.0, "end": 241.36, "text": " Also, since this has a lot of implications pertaining to AI safety and how to create a"}, {"start": 241.36, "end": 246.56, "text": " lined agents, an increasingly important topic these days, huge respect for deep mind"}, {"start": 246.56, "end": 250.44, "text": " for investing more and more of their time and money in this area."}, {"start": 250.44, "end": 254.44, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=q22XWPM0Egc
AI Learning Morphology and Movement...at the Same Time!
The paper "Reinforcement Learning for Improving Agent Design" is available here: https://designrl.github.io/ https://arxiv.org/abs/1810.03779 Our job posting for a PostDoc: https://www.cg.tuwien.ac.at/jobs/3dspatialization/ Pick up cool perks on our Patreon page: › https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Javier Bustamante, John De Witt, Kaiesh Vohra, Kjartan Olason, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/photo-1130497/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Reinforcement learning is a class of learning algorithms that chooses a set of actions in an environment to maximize the score. Typical use cases of this include writing an AI to master video games, or avoiding obstacles with a drone and many more cool applications. What ties most of these ideas together is that whenever we talk about reinforcement learning, we typically mean teaching an agent how to navigate in an environment. A few years ago, a really fun online app surfaced that used a genetic algorithm to evolve the morphology of a simple 2D car with the goal of having it roll as far away from a starting point as possible. It used a genetic algorithm that is quite primitive compared to modern machine learning techniques and yet it still does well on this, so how about testing a proper reinforcement learner to optimize the body of the agent? What's more, what if we could jointly learn both the body and the navigation at the same time? Okay, so what does this mean in practice? Let's have a look at an example. Here, we have an ant that is supported by four legs, each consisting of three parts that are controlled by two motor joints. With the classical problem formulation, we can teach this ant to use these joints to learn to walk, but in the new formulation, not only the movement, but the body morphology is also subject to change. As a result, this ant learned that the body can also be carried by longer, thinner legs and adjusted itself accordingly. As a plus, it also learned how to walk with these new legs and this way it was able to outclass the original agent. In this other example, the agent learns to more efficiently navigate a flat terrain by redesigning its legs that are now reminiscent of small springs and uses them to skip its way forward. Of course, if we change the terrain, the design of an effective agent also changes accordingly and the super interesting part here is that it came up with an asymmetric design that is able to climb stairs and travel uphill efficiently. Loving it. We can also task this technique to minimize the amount of building materials used to solve a task and subsequently, it builds an adorable little agent with tiny legs that is still able to efficiently traverse this flat terrain. This principle can also be applied to the more difficult version of this terrain which results in a lean, insect-like solution that can still finish this level that uses about 75% less materials than the original solution. And again, remember that not only the design, but the movement is learned here at the same time. While we look at these really fun bloopers, I'd like to let you know that we have an opening at our institute at the Vienna University of Technology for one postdoctoral researcher. The link is available in the video description, read it carefully to make sure you qualify, and if you apply through the specified email address, make sure to mention Two Minute Papers in your message. This is an excellent opportunity to read and write amazing papers and work with some of the sweetest people. This is not standard practice in all countries, so I will note that you can check the salary right in the call, it is a well-paid position in my opinion, and you get to live in Vienna. Also, your salary is paid not 12, but 14 times a year. That's Austria for you. It doesn't get any better than that. The deadline is the end of January. Happy holidays to all of you.
Thanks for watching and for your generous support, and I'll see you early January.
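Here is a minimal Python sketch of the joint formulation, under made-up assumptions: the body parameters (think leg lengths) and the controller parameters live in one vector, a toy stand-in for the physics rollout scores them together, and a simple evolution-strategies update improves both at once. The paper itself plugs the morphology into a proper reinforcement learning setup; this only illustrates why nothing special is needed to let the body co-evolve with the movement.

```python
# Jointly optimize body and controller parameters with plain evolution strategies.
import numpy as np

rng = np.random.default_rng(1)

N_BODY, N_POLICY = 4, 6
params = np.zeros(N_BODY + N_POLICY)          # [leg lengths | controller weights]

def rollout_return(p):
    """Toy stand-in for a physics rollout: a good gait and well-sized legs both
    contribute to distance travelled (completely made-up objective)."""
    body, policy = p[:N_BODY], p[N_BODY:]
    gait_quality = -np.sum((policy - 0.5) ** 2)     # best gait at 0.5 in this toy
    body_quality = -np.sum((body - 1.2) ** 2)       # best legs at length 1.2
    return gait_quality + body_quality

SIGMA, LR, POP = 0.1, 0.05, 64
for generation in range(300):
    noise = rng.normal(size=(POP, params.size))
    returns = np.array([rollout_return(params + SIGMA * n) for n in noise])
    advantage = (returns - returns.mean()) / (returns.std() + 1e-8)
    params += LR / (POP * SIGMA) * noise.T @ advantage   # reward-weighted noise step

print("learned leg lengths:", np.round(params[:N_BODY], 2))
print("learned gait params:", np.round(params[N_BODY:], 2))
```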
[{"start": 0.0, "end": 4.64, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.64, "end": 9.44, "text": " Reinforcement learning is a class of learning algorithms that chooses a set of actions in"}, {"start": 9.44, "end": 12.200000000000001, "text": " an environment to maximize the score."}, {"start": 12.200000000000001, "end": 18.32, "text": " Typical use cases of this include writing an AI to master video games, or avoiding obstacles"}, {"start": 18.32, "end": 21.8, "text": " with a drone and many more cool applications."}, {"start": 21.8, "end": 27.080000000000002, "text": " What ties most of these ideas together is that whenever we talk about reinforcement learning,"}, {"start": 27.08, "end": 31.799999999999997, "text": " we typically mean teaching an agent how to navigate in an environment."}, {"start": 31.799999999999997, "end": 38.239999999999995, "text": " A few years ago, a really fun online app surfaced that used a genetic algorithm to evolve"}, {"start": 38.239999999999995, "end": 45.0, "text": " the morphology of a simple 2D car with the goal of having it roll as far away from a starting"}, {"start": 45.0, "end": 46.72, "text": " point as possible."}, {"start": 46.72, "end": 52.480000000000004, "text": " It used a genetic algorithm that is quite primitive compared to modern machine learning techniques"}, {"start": 52.48, "end": 58.8, "text": " and yet it still does well on this, so how about testing a proper reinforcement learner"}, {"start": 58.8, "end": 62.199999999999996, "text": " to optimize the body of the agent?"}, {"start": 62.199999999999996, "end": 68.84, "text": " What's more, what if we could jointly learn both the body and the navigation at the same"}, {"start": 68.84, "end": 69.84, "text": " time?"}, {"start": 69.84, "end": 72.67999999999999, "text": " Okay, so what does this mean in practice?"}, {"start": 72.67999999999999, "end": 74.8, "text": " Let's have a look at an example."}, {"start": 74.8, "end": 80.8, "text": " Here, we have an ant that is supported by four legs, each consisting of three parts that"}, {"start": 80.8, "end": 83.64, "text": " are controlled by two motor joints."}, {"start": 83.64, "end": 88.47999999999999, "text": " With the classical problem formulation, we can teach this ant to use these joints to learn"}, {"start": 88.47999999999999, "end": 95.12, "text": " to walk, but in the new formulation, not only the movement, but the body morphology is also"}, {"start": 95.12, "end": 96.92, "text": " subject to change."}, {"start": 96.92, "end": 103.47999999999999, "text": " As a result, this ant learned that the body can also be carried by longer, thinner legs"}, {"start": 103.47999999999999, "end": 106.0, "text": " and adjusted itself accordingly."}, {"start": 106.0, "end": 111.36, "text": " As a plus, it also learned how to walk with these new legs and this way it was able to"}, {"start": 111.36, "end": 114.16, "text": " outclass the original agent."}, {"start": 114.16, "end": 119.56, "text": " In this other example, the agent learns to more efficiently navigate a flat terrain by"}, {"start": 119.56, "end": 125.92, "text": " redesigning its legs that are now reminiscent of small springs and uses them to skip its"}, {"start": 125.92, "end": 127.32, "text": " way forward."}, {"start": 127.32, "end": 133.32, "text": " Of course, if we change the terrain, the design of an effective agent also changes accordingly"}, {"start": 133.32, "end": 138.2, "text": " and the super interesting part here is that 
it came up with an asymmetric design that"}, {"start": 138.2, "end": 142.95999999999998, "text": " is able to climb stairs and travel uphill efficiently."}, {"start": 142.95999999999998, "end": 144.44, "text": " Loving it."}, {"start": 144.44, "end": 149.4, "text": " We can also task this technique to minimize the amount of building materials used to solve"}, {"start": 149.4, "end": 155.64, "text": " a task and subsequently, it builds an adorable little agent with tiny legs that is still"}, {"start": 155.64, "end": 159.04, "text": " able to efficiently traverse this flat terrain."}, {"start": 159.04, "end": 163.35999999999999, "text": " This principle can also be applied to the more difficult version of this terrain which"}, {"start": 163.35999999999999, "end": 169.48, "text": " results in a lean, insect-like solution that can still finish this level that uses about"}, {"start": 169.48, "end": 173.6, "text": " 75% less materials than the original solution."}, {"start": 173.6, "end": 178.56, "text": " And again, remember that not only the design, but the movement is learned here at the same"}, {"start": 178.56, "end": 179.56, "text": " time."}, {"start": 179.56, "end": 183.16, "text": " While we look at these really fun bloopers, I'd like to let you know that we have an"}, {"start": 183.16, "end": 189.44, "text": " opening at our institute at the Vienna University of Technology for one postdoctoral researcher."}, {"start": 189.44, "end": 194.68, "text": " The link is available in the video description, read it carefully to make sure you qualify,"}, {"start": 194.68, "end": 199.2, "text": " and if you apply through the specified email address, make sure to mention two minute"}, {"start": 199.2, "end": 200.72, "text": " papers in your message."}, {"start": 200.72, "end": 206.2, "text": " This is an excellent opportunity to read and write amazing papers and work with some"}, {"start": 206.2, "end": 207.88, "text": " of the sweetest people."}, {"start": 207.88, "end": 212.44, "text": " This is not standard practice in all countries, so I will note that you can check the salary"}, {"start": 212.44, "end": 218.24, "text": " right in the call, it is a well-paid position in my opinion, and you get to live in Vienna."}, {"start": 218.24, "end": 223.2, "text": " Also, your salary is paid not 12, but 14 times a year."}, {"start": 223.2, "end": 224.52, "text": " That's Austria for you."}, {"start": 224.52, "end": 226.64, "text": " It doesn't get any better than that."}, {"start": 226.64, "end": 228.48, "text": " That line is end of January."}, {"start": 228.48, "end": 229.96, "text": " Happy holidays to all of you."}, {"start": 229.96, "end": 242.56, "text": " Thanks for watching and for your generous support, and I'll see you early January."}]
Two Minute Papers
https://www.youtube.com/watch?v=ZKQp28OqwNQ
BigGANs: AI-Based High-Fidelity Image Synthesis
This episode was supported by insilico.com. "Anything outside life extension is a complete waste of time". See their papers: - Papers: https://www.ncbi.nlm.nih.gov/pubmed/?term=Zhavoronkov%2Ba - Website: http://insilico.com/ The paper "Large Scale GAN Training for High Fidelity Natural Image Synthesis" is available here: - Paper: https://arxiv.org/abs/1809.11096 - Try it here: https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/biggan_generation_with_tf_hub.ipynb We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Javier Bustamante, John De Witt, Kaiesh Vohra, Kjartan Olason, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/photo-494706/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Approximately 150 episodes ago, we looked at DeepMind's amazing algorithm that was able to look at a database with images of birds, and it could learn about them so much that we could provide a text description of an imaginary bird type and it would dream up new images of them. It was a truly breathtaking piece of work, and its main limitation was that it could only come up with coarse images. It didn't give us a lot of details. Later, we talked about Nvidia's algorithm that started out with such a coarse image, but didn't stop there. It progressively recomputed this image many times, each time with more and more details. This was able to create imaginary celebrities with tons of detail. This new work offers a number of valuable improvements over the previous techniques. It can train bigger neural networks with even more parameters, create extremely detailed images with remarkable performance, so much so that if you have a reasonably powerful graphics card, you can run it yourself here. The link is in the video description. Training these neural networks is also more stable than it used to be with previous techniques. As a result, it not only supports creating these absolutely beautiful images, but also gives us the opportunity to exert artistic control on the outputs. I think this is super fun. I could play with this all day long. What's more, we can also interpolate between these images, which means that if we have desirable images A and B, it can compute intermediate images between them, and the challenging part is that these intermediate images shouldn't be some sort of average between the two, which would be gibberish, but they would have to be images that are meaningful and can stand on their own. Look at this. Flying colors. And now comes the best part. The results were measured in terms of their inception score. This inception score defines how recognizable and diverse these generated images are, and most importantly, both of these are codified in a mathematical manner to reduce the subjectivity of the evaluation. This score is not perfect by any means, but it typically correlates well with the scores given by humans. The best of the earlier works had an inception score of around 50. And hold on to your papers because the score of this new technique is no less than 166, and if we measured real images, they would score around 233. What an incredible leap in technology. And we are even being paid for creating and playing with such learning algorithms. What a time to be alive. A big thumbs up for the authors of the paper for providing quite a bit of information on failure cases as well. We also thank Insilico Medicine for supporting this video. They are using these amazing learning algorithms to create new molecules, identify new protein targets with the aim to cure diseases, and aging itself. Make sure to check them out in the video description. They are our first sponsors, and it's been such a joy to work with them. Thanks for watching, and for your generous support, and I'll see you next time.
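The interpolations mentioned above are typically produced by blending two latent codes (and, for a class-conditional model, two class embeddings) and running the generator on every intermediate code. The sketch below shows that recipe in Python; `generator` is a trivial placeholder, not the real BigGAN, which can be loaded from the Colab notebook linked in the description.

```python
# Latent-space interpolation sketch; the generator here is a stand-in, not BigGAN.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 128

def generator(z, class_vec):
    """Placeholder for a trained class-conditional generator G(z, y) -> image."""
    return np.outer(z[:8], class_vec[:8])             # fake 8x8 "image"

def slerp(a, b, t):
    """Spherical interpolation, which stays on the shell where the Gaussian prior puts mass."""
    a_n, b_n = a / np.linalg.norm(a), b / np.linalg.norm(b)
    omega = np.arccos(np.clip(a_n @ b_n, -1.0, 1.0))
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

# Two random latent codes and two class embeddings; in BigGAN, the "artistic control"
# knob is largely the truncation of z toward zero (fidelity vs. variety trade-off).
z_a, z_b = rng.normal(size=LATENT_DIM), rng.normal(size=LATENT_DIM)
y_a, y_b = rng.normal(size=LATENT_DIM), rng.normal(size=LATENT_DIM)

frames = []
for t in np.linspace(0.0, 1.0, 9):
    z = slerp(z_a, z_b, t)
    y = (1 - t) * y_a + t * y_b                        # linear blend of the conditioning
    frames.append(generator(z, y))

print(len(frames), "intermediate images, each of shape", frames[0].shape)
```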
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifehir."}, {"start": 4.4, "end": 10.68, "text": " Approximately 150 episodes ago, we looked at deep minds amazing algorithm that was able"}, {"start": 10.68, "end": 16.84, "text": " to look at a database with images of birds, and it could learn about them so much that"}, {"start": 16.84, "end": 22.32, "text": " we could provide a text description of an imaginary bird type and it would dream up new"}, {"start": 22.32, "end": 23.8, "text": " images of them."}, {"start": 23.8, "end": 28.8, "text": " It was a truly breathtaking piece of work, and its main limitation was that it could only"}, {"start": 28.8, "end": 31.0, "text": " come up with course images."}, {"start": 31.0, "end": 33.120000000000005, "text": " It didn't give us a lot of details."}, {"start": 33.120000000000005, "end": 38.760000000000005, "text": " Later, we talked about Nvidia's algorithm that started out with such a course image, but"}, {"start": 38.760000000000005, "end": 40.08, "text": " didn't stop there."}, {"start": 40.08, "end": 46.2, "text": " It progressively recomputed this image many times, each time with more and more details."}, {"start": 46.2, "end": 50.72, "text": " This was able to create imaginary celebrities with tons of detail."}, {"start": 50.72, "end": 55.36, "text": " This new work offers a number of valuable improvements over the previous techniques."}, {"start": 55.36, "end": 60.68, "text": " It can train bigger neural networks with even more parameters, create extremely detailed"}, {"start": 60.68, "end": 66.48, "text": " images with remarkable performance, so much so that if you have a reasonably powerful graphics"}, {"start": 66.48, "end": 68.88, "text": " card, you can run it yourself here."}, {"start": 68.88, "end": 71.24, "text": " The link is in the video description."}, {"start": 71.24, "end": 75.16, "text": " Training these neural networks is also more stable than it used to be with previous"}, {"start": 75.16, "end": 76.16, "text": " techniques."}, {"start": 76.16, "end": 81.48, "text": " As a result, it not only supports creating these absolutely beautiful images, but also gives"}, {"start": 81.48, "end": 86.28, "text": " us the opportunity to exert artistic control on the outputs."}, {"start": 86.28, "end": 87.80000000000001, "text": " I think this is super fun."}, {"start": 87.80000000000001, "end": 90.32000000000001, "text": " I could play with this all day long."}, {"start": 90.32000000000001, "end": 95.32000000000001, "text": " What's more, we can also interpolate between these images, which means that if we have desirable"}, {"start": 95.32000000000001, "end": 101.36, "text": " images A and B, it can compute intermediate images between them, and the challenging part"}, {"start": 101.36, "end": 106.56, "text": " is that these intermediate images shouldn't be some sort of average between the two, which"}, {"start": 106.56, "end": 111.48, "text": " would be gibberish, but they would have to be images that are meaningful and can stand"}, {"start": 111.48, "end": 112.84, "text": " on their own."}, {"start": 112.84, "end": 114.16, "text": " Look at this."}, {"start": 114.16, "end": 115.64, "text": " Flying colors."}, {"start": 115.64, "end": 117.56, "text": " And now comes the best part."}, {"start": 117.56, "end": 121.12, "text": " The results were measured in terms of their inception score."}, {"start": 121.12, "end": 127.08, "text": " This inception score defines how recognizable and 
diverse these generated images are,"}, {"start": 127.08, "end": 133.36, "text": " and most importantly, both of these are codified in a mathematical manner to reduce the subjectivity"}, {"start": 133.36, "end": 135.08, "text": " of the evaluation."}, {"start": 135.08, "end": 139.64000000000001, "text": " This score is not perfect by any means, but it typically correlates well with the scores"}, {"start": 139.64000000000001, "end": 141.28, "text": " given by humans."}, {"start": 141.28, "end": 145.4, "text": " The best of the earlier works had an inception score of around 50."}, {"start": 145.4, "end": 152.16000000000003, "text": " And hold on to your papers because the score of this new technique is no less than 166,"}, {"start": 152.16000000000003, "end": 157.4, "text": " and if we would measure real images, they would score around 233."}, {"start": 157.4, "end": 160.28, "text": " What an incredible leap in technology."}, {"start": 160.28, "end": 164.88000000000002, "text": " And we are even being paid for creating and playing with such learning algorithms."}, {"start": 164.88, "end": 166.68, "text": " What a time to be alive."}, {"start": 166.68, "end": 171.12, "text": " A big thumbs up for the authors of the paper for providing quite a bit of information on"}, {"start": 171.12, "end": 172.79999999999998, "text": " failure cases as well."}, {"start": 172.79999999999998, "end": 176.24, "text": " We also thank Incelico Medicine for supporting this video."}, {"start": 176.24, "end": 181.76, "text": " They are using these amazing learning algorithms to create new molecules, identify new protein"}, {"start": 181.76, "end": 186.96, "text": " targets with the aim to cure diseases, and aging itself."}, {"start": 186.96, "end": 189.4, "text": " Make sure to check them out in the video description."}, {"start": 189.4, "end": 193.2, "text": " They are our first sponsors, and it's been such a joy to work with them."}, {"start": 193.2, "end": 196.95999999999998, "text": " Thanks for watching, and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=V6G717ewUuw
Can an AI Learn To Draw a Caricature?
Pick up cool perks on our Patreon page: › https://www.patreon.com/TwoMinutePapers The paper "CariGANs: Unpaired Photo-to-Caricature Translation" is available here: https://cari-gan.github.io/ We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Javier Bustamante, John De Witt, Kaiesh Vohra, Kjartan Olason, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/photo-1162213/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Style transfer is an interesting problem in machine learning research where we have two input images, one for content and one for style, and the output is our content image reimagined with this new style. The cool part is that the content can be a photo straight from our camera and the style can be a painting, which leads to super fun and really good-looking results. This subfield is only a few years old and has seen a number of amazing papers: style transfer for HD images, videos, and some of these forgeries were even able to make professional art curators think that they were painted by a real artist. So here's a crazy idea. How about using style transfer to create caricatures? Well, this sounds quite challenging. Just think about it. A caricature is an elusive art where certain human features are exaggerated and generally the human face needs to be simplified and boiled down into its essence. It is a very human thing to do. So how could an AI possibly be endowed with such a deep understanding of, for instance, a human face? That sounds almost impossible. Our suspicion is further reinforced as we look at how previous style transfer algorithms try to deal with this problem. Not too well, but no wonder, it would be unfair to expect great results as this is not what they were designed for. But now, look at these truly incredible results that were made with this new work. The main difference between the older works and this one is that one, it uses generative adversarial networks, GANs, in short. This is an architecture where two neural networks learn together. One learns to generate better forgeries and the other learns to find out whether an image has been forged. However, this would still not create the results that you see here. An additional improvement is that we have not one, but two of these GANs. One deals with style, but it is trained in a way to keep the essence of the image. And the other deals with changing and warping the geometry of the image to achieve an artistic effect. This leans on the input of a landmark detector that gives it around 60 points that show the location of the most important parts of a human face. The output of this geometry GAN is a distorted version of this point set, which can then be used to warp the style image to obtain the final output. This is a great idea because the amount of distortion applied to the points can be controlled. So, we can tell the AI how crazy of a result we are looking for. Great! The authors also experimented with applying this to video. In my opinion, the results are incredible for a first crack at this problem. We are probably just one paper away from an AI automatically creating absolutely mind-blowing caricature videos. Make sure to have a look at the paper as it has a ton more results, and of course, every element of the system is explained in great detail there. And if you enjoyed this episode and you would like to access all future videos in early access, or get your name immortalized in the video description as a key supporter, please consider supporting us on patreon.com slash 2-minute papers. The link is available in the video description. We were able to significantly improve our video editing rig, and this was possible because of your generous support. I am so grateful. Thank you so much. And this is why every episode ends with the usual quote, thanks for watching and for your generous support, and I'll see you next time.
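The geometry half of the idea can be boiled down to a very small sketch: exaggerate how far each detected facial landmark sits from an average face, with a user-controlled strength, and use the displaced landmarks as warp targets. This Python snippet is my simplification with invented numbers, not the paper's learned geometry network, which predicts the displaced points itself.

```python
# Controllable landmark exaggeration, the core of the geometric caricature step.
import numpy as np

rng = np.random.default_rng(0)
N_LANDMARKS = 60                                            # roughly what the detector provides

mean_face = rng.uniform(0.3, 0.7, size=(N_LANDMARKS, 2))    # stand-in average landmarks
this_face = mean_face + rng.normal(scale=0.02, size=(N_LANDMARKS, 2))  # detected landmarks

def exaggerate(landmarks, mean, strength=2.0):
    """Push each landmark further along its deviation from the mean face.
    strength=1 keeps the face as-is; larger values give a 'crazier' caricature."""
    return mean + strength * (landmarks - mean)

caricature_pts = exaggerate(this_face, mean_face, strength=2.5)

# In the real system, a learned geometry GAN predicts the displaced points and a
# separate style GAN restyles the colors; an image warp would then move the pixels
# from this_face positions to caricature_pts positions.
print("max landmark displacement:", np.abs(caricature_pts - this_face).max())
```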
[{"start": 0.0, "end": 4.0, "text": " Dear Fellow Scholars, this is two-minute Papers with Karojona Ifaher."}, {"start": 4.0, "end": 8.0, "text": " Style Transfer is an interesting problem in machine learning research"}, {"start": 8.0, "end": 13.0, "text": " where we have two input images, one for content and one for style,"}, {"start": 13.0, "end": 17.0, "text": " and the output is our content image reimagined with this new style."}, {"start": 17.0, "end": 22.0, "text": " The cool part is that the content can be a photo straight from our camera"}, {"start": 22.0, "end": 28.0, "text": " and the style can be a painting which leads to super fun and really good looking results."}, {"start": 28.0, "end": 33.0, "text": " This subfield is only a few years old and has seen a number of amazing papers."}, {"start": 33.0, "end": 44.0, "text": " Style Transfer for HD images, videos and some of these forgeries were even able to make professional art curators think that they were painted by a real artist."}, {"start": 44.0, "end": 51.0, "text": " So here's a crazy idea. How about using style transfer to create caricatures?"}, {"start": 51.0, "end": 55.0, "text": " Well, this sounds quite challenging. Just think about it."}, {"start": 55.0, "end": 60.0, "text": " A caricature is an elusive art where certain human features are exaggerated"}, {"start": 60.0, "end": 66.0, "text": " and generally the human face needs to be simplified and boiled down into its essence."}, {"start": 66.0, "end": 68.0, "text": " It is a very human thing to do."}, {"start": 68.0, "end": 76.0, "text": " So how could possibly an AI be endowed with such a deep understanding of, for instance, a human face?"}, {"start": 76.0, "end": 78.0, "text": " That sounds almost impossible."}, {"start": 78.0, "end": 85.0, "text": " Our suspicion is further reinforced as we look at how previous style transfer algorithms try to deal with this problem."}, {"start": 85.0, "end": 92.0, "text": " Not too well, but no wonder it would be unfair to expect great results as this is not what they were designed for."}, {"start": 92.0, "end": 97.0, "text": " But now, look at these truly incredible results that were made with this new work."}, {"start": 97.0, "end": 101.0, "text": " The main difference between the older works and this one is that one,"}, {"start": 101.0, "end": 105.0, "text": " it uses generative adversarial networks, GANs, in short."}, {"start": 105.0, "end": 109.0, "text": " This is an architecture where two neural networks learn together."}, {"start": 109.0, "end": 116.0, "text": " One learns to generate better forgeries and the other learns to find out whether an image has been forged."}, {"start": 116.0, "end": 120.0, "text": " However, this would still not create the results that you see here."}, {"start": 120.0, "end": 124.0, "text": " An additional improvement is that we have not one, but two of these GANs."}, {"start": 124.0, "end": 127.0, "text": " One deals with style."}, {"start": 127.0, "end": 131.0, "text": " But it is trained in a way to keep the essence of the image."}, {"start": 131.0, "end": 138.0, "text": " And the other deals with changing and warping the geometry of the image to achieve an artistic effect."}, {"start": 138.0, "end": 147.0, "text": " This leans on the input of a landmark detector that gives it around 60 points that show the location of the most important parts of a human face."}, {"start": 147.0, "end": 156.0, "text": " The output of this geometry, again, is a distorted version of this point set, which can 
then be used to warp the style image to obtain the final output."}, {"start": 156.0, "end": 162.0, "text": " This is a great idea because the amount of distortion applied to the points can be controlled."}, {"start": 162.0, "end": 168.0, "text": " So, we can tell the AI how crazy of a result we are looking for."}, {"start": 168.0, "end": 174.0, "text": " Great! The authors also experimented applying this to video."}, {"start": 174.0, "end": 179.0, "text": " In my opinion, the results are incredible for a first crack at this problem."}, {"start": 179.0, "end": 187.0, "text": " We are probably just one paper away from an AI automatically creating absolutely mind-blowing caricature videos."}, {"start": 187.0, "end": 196.0, "text": " Make sure to have a look at the paper as it has a ton more results, and of course, every element of the system is explained in great detail there."}, {"start": 196.0, "end": 206.0, "text": " And if you enjoyed this episode and you would like to access all future videos in early access, or get your name immortalized in the video description as a key supporter,"}, {"start": 206.0, "end": 211.0, "text": " please consider supporting us on patreon.com slash 2-minute papers."}, {"start": 211.0, "end": 214.0, "text": " The link is available in the video description."}, {"start": 214.0, "end": 220.0, "text": " We were able to significantly improve our video editing rig, and this was possible because of your generous support."}, {"start": 220.0, "end": 223.0, "text": " I am so grateful. Thank you so much."}, {"start": 223.0, "end": 226.0, "text": " And this is why every episode ends with the usual quote,"}, {"start": 226.0, "end": 236.0, "text": " thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=AGm3hF_BlYM
This AI Learns Human Movement From Videos
The paper "Towards Learning a Realistic Rendering of Human Behavior" is available here: https://compvis.github.io/hbugen2018/ Pick up cool perks on our Patreon page: › https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Javier Bustamante, John De Witt, Kaiesh Vohra, Kjartan Olason, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/photo-1972569/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In a previous episode, we discussed a technique where we could specify a low-quality image of a test subject and a photo of a different person. What happened then is that the algorithm transformed our test subject into that pose. With another algorithm, we can transfer our facial gestures onto a different target subject. And this new method does something completely different. Here we can copy a full-body movement from a video and transfer it onto a target person. This way we can appear to be playing tennis, baseball, or finally be able to perform a hundred chin-ups. Well, at least on Instagram. Now look here. Up here you see the target poses and on the left the target subjects. And between them we see the output of this algorithm with the target subjects taking these poses. As you see, the algorithm is quite consistent in the sense that during walking we often encounter the same pose, which results in a very similar image. That's exactly the kind of consistency that we are looking for. Remarkably, this algorithm is also able to synthesize angles of these target subjects that it hadn't seen before. For instance, the backside of this person was never shown to the algorithm and it correctly guesses that interesting details, like the belt of this character, continue around the waist. Really cool, I love it. We can also put these characters in a virtual environment and animate them there. Now this work, like most papers that explore something completely new, is raw and experimental. Clearly there are issues with the occlusions, flickering and the silhouettes of the characters give the trick away. Anyone looking at this footage can tell in a second that it is not real. The reason I am so excited about this is that now we finally see that this is a viable concept and it will provide fertile grounds for new follow-up research works to be improved upon. Two more papers down the line, it will probably work in HD and look significantly better. Just imagine how amazing that would be for movies, computer games and telepresence applications. Sign me up. And computer graphics research has a vast body of papers on how to illuminate these characters to appear as if they were really there in this environment. Will this be done with computer graphics or through AI? I am really keen to see how these fields will come together to solve such a challenging problem. What a time to be alive. Thanks for watching and for your generous support, and I'll see you next time.
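The overall pipeline can be summarized in a few lines: estimate body pose keypoints for every frame of the source video, then feed those keypoints to a generator trained on footage of the target person. The Python sketch below uses throwaway placeholder functions and shapes; it only shows the data flow, not the authors' models.

```python
# Data-flow sketch of video-to-video movement transfer (all models are placeholders).
import numpy as np

rng = np.random.default_rng(0)

def estimate_pose(frame):
    """Stand-in for a pose estimator returning body keypoints for one frame."""
    return rng.uniform(size=(17, 2))                  # e.g. 17 joints in image coordinates

def render_target(keypoints, target_appearance):
    """Stand-in for a pose-conditioned generator trained on the target person."""
    return np.tensordot(keypoints.ravel(), target_appearance, axes=0)[:16, :16]

source_video = [np.zeros((64, 64, 3)) for _ in range(8)]   # dummy source frames
target_appearance = rng.normal(size=34)                     # dummy appearance code

output_video = [render_target(estimate_pose(f), target_appearance) for f in source_video]
print(len(output_video), "rendered frames of the target person")
```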
[{"start": 0.0, "end": 5.8, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fahir, in a previous episode"}, {"start": 5.8, "end": 11.48, "text": " we discussed a technique where we could specify a low-quality image of a test subject and"}, {"start": 11.48, "end": 13.8, "text": " a photo of a different person."}, {"start": 13.8, "end": 19.28, "text": " What happened then is that the algorithm transformed our test subject into that pose."}, {"start": 19.28, "end": 25.400000000000002, "text": " With another algorithm, we can transfer our facial gestures onto a different target subject."}, {"start": 25.400000000000002, "end": 29.04, "text": " And this new method does something completely different."}, {"start": 29.04, "end": 35.92, "text": " Here we can copy a full body movement from a video and transfer it onto a target person."}, {"start": 35.92, "end": 42.24, "text": " This way we can appear to be playing tennis, baseball, or finally be able to perform a hundred"}, {"start": 42.24, "end": 43.24, "text": " chin-ups."}, {"start": 43.24, "end": 46.8, "text": " Well, at least on Instagram."}, {"start": 46.8, "end": 48.32, "text": " Now look here."}, {"start": 48.32, "end": 53.04, "text": " Up here you see the target poses and on the left the target subjects."}, {"start": 53.04, "end": 59.4, "text": " And between them we see the output of this algorithm with the target subjects taking these poses."}, {"start": 59.4, "end": 65.16, "text": " As you see, the algorithm is quite consistent in a sense that during walking we often encounter"}, {"start": 65.16, "end": 69.24, "text": " the same pose which results in a very similar image."}, {"start": 69.24, "end": 73.0, "text": " That's exactly the kind of consistency that we are looking for."}, {"start": 73.0, "end": 78.36, "text": " Remarkably, this algorithm is also able to synthesize angles of these target subjects"}, {"start": 78.36, "end": 80.56, "text": " that it hadn't seen before."}, {"start": 80.56, "end": 85.60000000000001, "text": " For instance, the backside of this person was never shown to the algorithm and it correctly"}, {"start": 85.60000000000001, "end": 91.92, "text": " guesses interesting details like the belt of this character to continue around the waist."}, {"start": 91.92, "end": 93.84, "text": " Really cool, I love it."}, {"start": 93.84, "end": 98.48, "text": " We can also put these characters in a virtual environment and animate them there."}, {"start": 98.48, "end": 105.4, "text": " Now this work, like most papers that explore something completely new, is raw and experimental."}, {"start": 105.4, "end": 111.2, "text": " Now clearly there are issues with the occlusions, flickering and the silhouettes of the characters"}, {"start": 111.2, "end": 112.72, "text": " give the trick away."}, {"start": 112.72, "end": 117.08000000000001, "text": " Anyone looking at this footage can tell in a second that it is not real."}, {"start": 117.08000000000001, "end": 122.84, "text": " The reason I am so excited about this is that now we finally see that this is a viable"}, {"start": 122.84, "end": 128.04000000000002, "text": " concept and it will provide fertile grounds for new follow-up research works to be improved"}, {"start": 128.04000000000002, "end": 129.04000000000002, "text": " upon."}, {"start": 129.04000000000002, "end": 134.88, "text": " Two more papers down the line, it will probably work in HD and look significantly better."}, {"start": 134.88, "end": 141.6, "text": " Just imagine how amazing that would 
be for movies, computer games and telepresence applications."}, {"start": 141.6, "end": 142.84, "text": " Sign me up."}, {"start": 142.84, "end": 148.56, "text": " And computer graphics research has a vast body of papers on how to illuminate these characters"}, {"start": 148.56, "end": 152.35999999999999, "text": " to appear as if they were really there in this environment."}, {"start": 152.35999999999999, "end": 155.72, "text": " Will this be done with computer graphics or through AI?"}, {"start": 155.72, "end": 160.6, "text": " I am really keen to see how these fields will come together to solve such a challenging"}, {"start": 160.6, "end": 161.6, "text": " problem."}, {"start": 161.6, "end": 163.16, "text": " What a time to be alive."}, {"start": 163.16, "end": 167.12, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=CIDRdLOWrXQ
Building a Curious AI With Random Network Distillation
This episode was supported by insilico.com. "Anything outside life extension is a complete waste of time". See their papers: - Papers: https://www.ncbi.nlm.nih.gov/pubmed/?term=Zhavoronkov%2Ba - Website: http://insilico.com/ The paper "Exploration by Random Network Distillation" is available here: Blog post: https://blog.openai.com/reinforcement-learning-with-prediction-based-rewards/ Paper: https://arxiv.org/abs/1810.12894 Code: https://github.com/openai/random-network-distillation We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Javier Bustamante, John De Witt, Kaiesh Vohra, Kjartan Olason, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://www.youtube.com/watch?v=4DdoZsOb53s Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #distillation
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In a previous episode, we talked about a class of learning algorithms that were endowed with curiosity. This new work also showcases a curious AI that aims to solve Montezuma's Revenge, which is a notoriously difficult platform game for an AI to finish. The main part of the difficulty arises from the fact that the AI needs to be able to plan for longer time periods, and interestingly, it also needs to learn that short-term rewards don't necessarily mean long-term success. Let's have a look at an example. Quoting the authors: There are four keys and six doors spread throughout the level. Any of the four keys can open any of the six doors, but are consumed in the process. To open the final two doors, the agent must therefore forego opening two of the doors that are easier to find, and that would immediately reward it for opening them. So what this means is that we have a tricky situation, because the agent would have to disregard the fact that it is getting a nice score from opening the doors and understand that these keys can be saved for later. This is very hard for an AI to resist, and, again, curiosity comes to the rescue. Curiosity, at least this particular definition of it, works in such a way that the harder it is for the AI to guess what will happen, the more excited it gets to perform an action. This drives the agent to finish the game and explore as much as possible, because it is curious to see what the next level holds. You see in the animation here that the big reward spikes show that the AI has found something new and meaningful, like losing a life, or narrowly avoiding an adversary. As you also see, climbing a ladder is a predictable, boring mechanic that the AI is not very excited about. Later, it becomes able to predict the results even better, the second and third time around, therefore it gets even less excited about ladders. This other animation shows how this curious agent explores adjacent rooms over time. This work also introduces a technique that the authors call random network distillation. This means that we start out from a completely randomly initialized, untrained neural network, and over time, slowly distill it into a trained one. This distillation also makes our neural network immune to the noisy TV problem from our previous episode, where our curious, unassuming agent would get stuck in front of a TV that continually plays new content. It also takes into consideration the score reported by the game and has an internal motivation to explore as well. And hold on to your papers, because it can not only perform well in the game, but this AI is able to perform better than the average human. And again, remember that no ground-truth knowledge is required; it was never demonstrated to the AI how one should play this game. Very impressive results indeed, and as you see, the pace of progress in machine learning research is nothing short of incredible. Make sure to have a look at the paper in the video description for more details. We'd also like to send a big thank you to Insilico Medicine for supporting this video. They use AI for research on preventing aging, believe it or not, and are doing absolutely amazing work. Make sure to check them out in the video description. Thanks for watching and for your generous support, and I'll see you next time.
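For readers who want a feel for how the random network distillation bonus works in practice, here is a minimal, illustrative sketch. This is not the authors' code: the observation size, network widths, learning rate and reward scaling are arbitrary choices for the example, and a real agent would feed these bonuses into a reinforcement learning algorithm alongside the game score.

```python
import torch
import torch.nn as nn

# A fixed, randomly initialized "target" network and a trainable "predictor".
# The predictor is trained to match the target's output on observations the
# agent visits; the remaining prediction error serves as the intrinsic reward.
# Sizes are arbitrary choices for this sketch.
OBS_DIM, FEAT_DIM = 64, 32

target = nn.Sequential(nn.Linear(OBS_DIM, 128), nn.ReLU(), nn.Linear(128, FEAT_DIM))
predictor = nn.Sequential(nn.Linear(OBS_DIM, 128), nn.ReLU(), nn.Linear(128, FEAT_DIM))

for p in target.parameters():          # the target is never trained
    p.requires_grad_(False)

optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-3)

def intrinsic_reward(obs_batch):
    """Per-observation curiosity bonus: how badly the predictor still misses."""
    with torch.no_grad():
        target_feat = target(obs_batch)
    pred_feat = predictor(obs_batch)
    per_obs_error = ((pred_feat - target_feat) ** 2).mean(dim=1)

    # Train the predictor on the same batch so familiar states become "boring".
    loss = per_obs_error.mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return per_obs_error.detach()

# Toy usage: a batch of observations yields a high bonus that shrinks with repetition.
obs = torch.randn(16, OBS_DIM)
for step in range(100):
    bonus = intrinsic_reward(obs)
    if step % 25 == 0:
        print("step", step, "mean bonus:", bonus.mean().item())
```

Because the prediction target is a fixed, deterministic function of the observation, the bonus is driven by how often a state has been visited rather than by how random the environment is, which is one intuition for the noisy-TV robustness mentioned above.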
[{"start": 0.0, "end": 4.28, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Zsolnai-Fehir."}, {"start": 4.28, "end": 8.88, "text": " In a previous episode, we talked about a class of learning algorithms that were endowed"}, {"start": 8.88, "end": 10.44, "text": " with curiosity."}, {"start": 10.44, "end": 16.18, "text": " This new work also showcases a curious AI that aims to solve Montezuma's revenge, which"}, {"start": 16.18, "end": 20.12, "text": " is a notoriously difficult platform game for an AI to finish."}, {"start": 20.12, "end": 25.16, "text": " The main part of the difficulty arises from the fact that the AI needs to be able to plan"}, {"start": 25.16, "end": 31.12, "text": " for longer time periods, and interestingly, it also needs to learn that short-term rewards"}, {"start": 31.12, "end": 33.64, "text": " don't necessarily mean long-term success."}, {"start": 33.64, "end": 35.68, "text": " Let's have a look at an example."}, {"start": 35.68, "end": 37.2, "text": " Quoting the authors."}, {"start": 37.2, "end": 41.08, "text": " There are four keys and six doors spread throughout the level."}, {"start": 41.08, "end": 46.32, "text": " Any of the four keys can open any of the six doors, but are consumed in the process."}, {"start": 46.32, "end": 51.8, "text": " To open the final two doors, the agent must therefore forego opening two of the doors"}, {"start": 51.8, "end": 56.44, "text": " that are easier to find, and that would immediately reward it for opening them."}, {"start": 56.44, "end": 60.68, "text": " So what this means is that we have a tricky situation, because the agent would have to"}, {"start": 60.68, "end": 66.44, "text": " disregard the fact that it is getting a nice score from opening the doors and understand"}, {"start": 66.44, "end": 69.16, "text": " that these keys can be saved for later."}, {"start": 69.16, "end": 75.96, "text": " This is very hard for an AI to resist, and, again, curiosity comes to the rescue."}, {"start": 75.96, "end": 81.16, "text": " Curiosity, at least this particular definition of it, works in a way that the harder the"}, {"start": 81.16, "end": 86.6, "text": " guess for the AI, what will happen, the more excited it gets to perform an action."}, {"start": 86.6, "end": 91.6, "text": " This drives the agent to finish the game and explore as much as possible, because it is"}, {"start": 91.6, "end": 94.88, "text": " curious to see what the next level holds."}, {"start": 94.88, "end": 99.64, "text": " You see in the animation here that the big reward spikes show that the AI has found something"}, {"start": 99.64, "end": 105.67999999999999, "text": " new and meaningful, like losing a life, or narrowly avoiding an adversary."}, {"start": 105.68, "end": 112.04, "text": " As you also see, climbing a ladder is a predictable, boring mechanic that the AI is not very excited"}, {"start": 112.04, "end": 113.04, "text": " about."}, {"start": 113.04, "end": 119.0, "text": " Later, it becomes able to predict the results even better, the second and third time around,"}, {"start": 119.0, "end": 122.52000000000001, "text": " therefore it gets even less excited about ladders."}, {"start": 122.52000000000001, "end": 128.64000000000001, "text": " This other animation shows how this curious agent explores adjacent rooms over time."}, {"start": 128.64000000000001, "end": 134.08, "text": " This work also introduces a technique that the authors call random network distillation."}, {"start": 134.08, "end": 139.36, "text": " This means that we 
start out from a completely randomly initialized, untrained neural network,"}, {"start": 139.36, "end": 143.44, "text": " and over time, slowly distill it into a trained one."}, {"start": 143.44, "end": 149.0, "text": " This distillation also makes our neural network immune to the noisy TV problem from our previous"}, {"start": 149.0, "end": 155.28, "text": " episode, where our curious, unassuming agent would get stuck in front of a TV that continually"}, {"start": 155.28, "end": 157.36, "text": " plays new content."}, {"start": 157.36, "end": 162.8, "text": " It also takes into consideration the score reported by the game and has an internal motivation"}, {"start": 162.8, "end": 164.32000000000002, "text": " to explore as well."}, {"start": 164.32000000000002, "end": 169.12, "text": " And hold on to your papers because it can not only perform well in the game, but this"}, {"start": 169.12, "end": 173.48000000000002, "text": " AI is able to perform better than the average human."}, {"start": 173.48000000000002, "end": 178.8, "text": " And again, remember that no-grandtruth knowledge is required, it was never demonstrated to the"}, {"start": 178.8, "end": 181.76000000000002, "text": " AI how one should play this game."}, {"start": 181.76000000000002, "end": 186.4, "text": " Very impressive results indeed, and as you see, the pace of progress in machine learning"}, {"start": 186.4, "end": 189.12, "text": " research is nothing short of incredible."}, {"start": 189.12, "end": 192.88, "text": " Make sure to have a look at the paper in the video description for more details."}, {"start": 192.88, "end": 197.8, "text": " We'd also like to send a big thank you to Insili Co-Medicine for supporting this video."}, {"start": 197.8, "end": 203.44, "text": " They use AI for research on preventing aging, believe it or not, and are doing absolutely"}, {"start": 203.44, "end": 204.76, "text": " amazing work."}, {"start": 204.76, "end": 207.08, "text": " Make sure to check them out in the video description."}, {"start": 207.08, "end": 219.28, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=ozUzomVQsWc
This AI Learns Acrobatics by Watching YouTube
This episode was supported by insilico.com. "Anything outside life extension is a complete waste of time". See their papers: - Papers: https://www.ncbi.nlm.nih.gov/pubmed/?term=Zhavoronkov%2Ba - Website: http://insilico.com/ The paper "SFV: Reinforcement Learning of Physical Skills from Videos" is available here: 1. https://xbpeng.github.io/projects/SFV/index.html 2. https://bair.berkeley.edu/blog/2018/10/09/sfv/ Pick up cool perks on our Patreon page: › https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, John De Witt, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. If we have an animation movie or a computer game where, like in any other digital medium, we wish to have high quality, lifelike animations for our characters, we likely have to use motion capture. Motion capture means that we put an actor in a studio and we ask this person to perform cartwheels and other motion types that we wish to transfer to our virtual characters. This works really well, but recording and cleaning all this data is a very expensive and laborious process. As we are entering the age of AI, of course, I wonder if there is a better way to do this. Just think about it. We have no shortage of videos here on YouTube about people performing cartwheels and other moves, and we have a bunch of learning algorithms that know what pose they are taking during the video. Surely we can make something happen here, right? Well, yes and no. A few methods already exist to perform this, but all of them have deal-breaking drawbacks. For instance, these previous works predict the body poses for each frame, but each of them has small individual inaccuracies that produce this annoying flickering effect. Researchers like to refer to this as the lack of temporal coherence. But this new technique is able to remedy this. Great result. This new work also boasts a long list of other incredible improvements. For instance, the resulting motions are also simulated in a virtual environment and it is shown that they are quite robust. So much so that we can throw a bunch of boxes against the AI and it can still adjust to it. Kind of. These motions can be retargeted to different body shapes. You can see as it is demonstrated here, quite aptly, with this neat little nod to Boston Dynamics. It can also adapt to challenging new environments, or get this. It can even work from a single photo instead of a video by completing the motion seen within. What kind of wizardry is that? How could it possibly perform that? First, we take an input photo or video and perform pose estimation on it. But this is still a per-frame computation, and you remember that this doesn't give us temporal consistency. This motion reconstruction step ensures that we have smooth transitions between the poses. And now comes the best part. We start simulating a virtual environment where a digital character tries to move its body parts to perform these actions. When we do this, we can not only reproduce these motions, but also continue them. This is where the wizardry lies. If you read the paper, which you should absolutely do, you will see that it uses OpenAI's amazing proximal policy optimization algorithm to find the best motions. Absolutely amazing. So this can perform and complete a variety of motions, adapt to more challenging landscapes, and do all this in a temporally smooth manner. However, the Gangnam Style dance still proves to be too hard. The technology is not there yet. We also thank Insilico Medicine for supporting this video. They work on AI-based drug discovery and aging research. They have some unbelievable papers on these topics. Make sure to check them out and this paper as well in the video description. Thanks for watching and for your generous support and I'll see you next time.
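As a rough illustration of two steps in the pipeline described above, here is a small sketch: first a toy version of motion reconstruction that turns jittery per-frame pose estimates into a temporally coherent trajectory, then a simple pose-imitation reward that a policy optimizer such as PPO could maximize in simulation. The smoothness weight, step size and exponential reward shape are made-up values for the example, not the ones used in the paper.

```python
import numpy as np

def reconstruct_motion(noisy_poses, smoothness=10.0, iters=300, lr=0.02):
    """Motion reconstruction as a tiny optimization: stay close to the per-frame
    pose estimates while penalizing frame-to-frame jitter (temporal coherence)."""
    poses = noisy_poses.copy()                      # shape: (frames, joints)
    for _ in range(iters):
        data_grad = poses - noisy_poses             # pull toward the estimates
        vel = np.diff(poses, axis=0)                # frame-to-frame differences
        smooth_grad = np.zeros_like(poses)
        smooth_grad[:-1] -= vel                     # gradient of the jitter penalty
        smooth_grad[1:] += vel
        poses -= lr * (data_grad + smoothness * smooth_grad)
    return poses

def imitation_reward(sim_pose, ref_pose, scale=2.0):
    """Reward the simulated character for matching the reference pose at this frame."""
    return float(np.exp(-scale * np.mean((sim_pose - ref_pose) ** 2)))

# Toy usage: a smooth sine trajectory corrupted by per-frame estimation noise.
t = np.linspace(0, 2 * np.pi, 60)
clean = np.stack([np.sin(t), np.cos(t)], axis=1)    # 60 frames, 2 "joints"
noisy = clean + 0.2 * np.random.randn(*clean.shape)

reference = reconstruct_motion(noisy)
print("jitter before:", np.abs(np.diff(noisy, axis=0)).mean())
print("jitter after: ", np.abs(np.diff(reference, axis=0)).mean())
print("reward for a perfect imitation:", imitation_reward(reference[10], reference[10]))
```

The actual paper's reconstruction and reward terms are richer than this, but the sketch captures the idea that temporal consistency comes from an explicit smoothing step, and that the simulated character is then trained to track the reconstructed reference motion.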
[{"start": 0.0, "end": 4.72, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Karo Zsolnai-Fehir."}, {"start": 4.72, "end": 10.48, "text": " If we have an animation movie or a computer game where, like in any other digital medium,"}, {"start": 10.48, "end": 15.8, "text": " we wish to have high quality, lifelike animations for our characters, we likely have to use"}, {"start": 15.8, "end": 17.32, "text": " motion capture."}, {"start": 17.32, "end": 22.080000000000002, "text": " Motion capture means that we put an actor in a studio and we ask this person to perform"}, {"start": 22.080000000000002, "end": 27.400000000000002, "text": " cartwheels and other motion types that we wish to transfer to our virtual characters."}, {"start": 27.4, "end": 33.4, "text": " This works really well, but recording and cleaning all this data is a very expensive and"}, {"start": 33.4, "end": 35.08, "text": " laborious process."}, {"start": 35.08, "end": 41.16, "text": " As we are entering the age of AI, of course, I wonder if there is a better way to do this."}, {"start": 41.16, "end": 42.16, "text": " Just think about it."}, {"start": 42.16, "end": 47.04, "text": " We have no shortage of videos here on YouTube about people performing cartwheels and other"}, {"start": 47.04, "end": 52.879999999999995, "text": " moves and we have a bunch of learning algorithms that know what pose they are taking during"}, {"start": 52.879999999999995, "end": 53.879999999999995, "text": " the video."}, {"start": 53.879999999999995, "end": 57.36, "text": " Surely we can make something happen here, right?"}, {"start": 57.36, "end": 59.88, "text": " Well, yes and no."}, {"start": 59.88, "end": 65.24, "text": " A few methods already exist to perform this, but all of them have deal-breaking drawbacks."}, {"start": 65.24, "end": 70.44, "text": " For instance, these previous work predicts the body poses for each frame, but each of"}, {"start": 70.44, "end": 76.72, "text": " them have small individual inaccuracies that produce this annoying flickering effect."}, {"start": 76.72, "end": 80.68, "text": " Researchers like to refer to this as the lack of temporal coherence."}, {"start": 80.68, "end": 84.0, "text": " But this new technique is able to remedy this."}, {"start": 84.0, "end": 85.68, "text": " Great result."}, {"start": 85.68, "end": 90.80000000000001, "text": " This new work also boasts a long list of other incredible improvements."}, {"start": 90.80000000000001, "end": 95.64000000000001, "text": " For instance, the resulting motions are also simulated in a virtual environment and it"}, {"start": 95.64000000000001, "end": 98.32000000000001, "text": " is shown that they are quite robust."}, {"start": 98.32000000000001, "end": 103.52000000000001, "text": " So much so that we can throw a bunch of boxes against the AI and it can still adjust to"}, {"start": 103.52000000000001, "end": 108.16000000000001, "text": " it."}, {"start": 108.16000000000001, "end": 109.28, "text": " Kind of."}, {"start": 109.28, "end": 112.76, "text": " These motions can be retargeted to different body shapes."}, {"start": 112.76, "end": 117.56, "text": " You can see as it is demonstrated here, quite aptly, with this neat little nod to Boston"}, {"start": 117.56, "end": 118.88000000000001, "text": " Dynamics."}, {"start": 118.88000000000001, "end": 128.44, "text": " It can also adapt to challenging new environments or get this."}, {"start": 128.44, "end": 135.88, "text": " It can even work from a single photo instead of a video by completing the 
motion seen within."}, {"start": 135.88, "end": 138.24, "text": " What kind of wizardry is that?"}, {"start": 138.24, "end": 140.56, "text": " How could it possibly perform that?"}, {"start": 140.56, "end": 145.6, "text": " First, we take an input photo or video and perform pose estimation on it."}, {"start": 145.6, "end": 150.36, "text": " But this is still a per frame computation and you remember that this doesn't give us temporal"}, {"start": 150.36, "end": 151.68, "text": " consistency."}, {"start": 151.68, "end": 157.12, "text": " This motion reconstruction step ensures that we have smooth transitions between the poses."}, {"start": 157.12, "end": 159.28, "text": " And now comes the best part."}, {"start": 159.28, "end": 164.64000000000001, "text": " We start simulating a virtual environment where a digital character tries to move its body"}, {"start": 164.64000000000001, "end": 167.24, "text": " parts to perform these actions."}, {"start": 167.24, "end": 172.76000000000002, "text": " Which we do this, we can not only reproduce these motions, but also continue them."}, {"start": 172.76000000000002, "end": 174.76000000000002, "text": " This is where the wizard relies."}, {"start": 174.76000000000002, "end": 180.44, "text": " If you read the paper, which you should absolutely do, you will see that it uses OpenAI's amazing"}, {"start": 180.44, "end": 185.84, "text": " proximal policy optimization algorithm to find the best motions."}, {"start": 185.84, "end": 187.0, "text": " Absolutely amazing."}, {"start": 187.0, "end": 193.52, "text": " So this can perform and complete a variety of motions, adapt to more challenging landscapes"}, {"start": 193.52, "end": 197.04000000000002, "text": " and do all this in a temporarily smooth manner."}, {"start": 197.04, "end": 200.76, "text": " However, the Gangnam Style Dance still proves to be too hard."}, {"start": 200.76, "end": 202.76, "text": " The technology is not there yet."}, {"start": 202.76, "end": 206.23999999999998, "text": " We also thank Insilico Medicine for supporting this video."}, {"start": 206.23999999999998, "end": 210.16, "text": " They work on AI-based drug discovery and aging research."}, {"start": 210.16, "end": 213.23999999999998, "text": " They have some unbelievable papers on these topics."}, {"start": 213.23999999999998, "end": 216.76, "text": " Make sure to check them out and this paper as well in the video description."}, {"start": 216.76, "end": 228.84, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=fzuYEStsQxc
This Curious AI Beats Many Games...and Gets Addicted to the TV
Pick up cool perks on our Patreon page: › https://www.patreon.com/TwoMinutePapers Crypto and PayPal links are available below. Thank you very much for your generous support! › PayPal: https://www.paypal.me/TwoMinutePapers › Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh › Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A › LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg The paper "Large-Scale Study of Curiosity-Driven Learning" is available here: Paper - https://pathak22.github.io/large-scale-curiosity/ Blog post - https://blog.openai.com/reinforcement-learning-with-prediction-based-rewards/ We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, John De Witt, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/photo-3774381/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Reinforcement learning is a learning algorithm that chooses a set of actions in an environment to maximize a score. This class of techniques enables us to train an AI to master video games, avoid obstacles with a drone, or clean up a table with a robot arm, and it has many more really cool applications. We use the words score and reward interchangeably, and the goal is that over time the agent has to learn to maximize a prescribed reward. So, where should the rewards come from? Most techniques work by using extrinsic rewards. Extrinsic rewards are only half a solution, as they need to come from somewhere, typically from the game in the form of a game score, which simply isn't present in every game. And even if it is present in a game, it is very different for Atari Breakout and, for instance, a strategy game. Intrinsic rewards are designed to come to the rescue, so the AI would be able to completely ignore the in-game score, and somehow have some sort of inner motivation to drive it to complete a level. But what could possibly be a good intrinsic reward that would work well on a variety of tasks? Shouldn't this be different from problem to problem? If so, we are back to square one. If we are to call our learner intelligent, then we need one algorithm that is able to solve a large number of different problems. If we need to reprogram it for every game, that's just a narrow intelligence. So, a key finding of this paper is that we can endow the AI with a very human-like property. Curiosity. Human babies also explore the world out of curiosity and, as a happy side effect, learn a lot of useful skills to navigate in this world later. However, as in our everyday speech, the definition of curiosity is a little nebulous. We have to provide a mathematical definition for it. In this work, this is defined as trying to maximize the number of surprises. This will drive the learner to favor actions that lead to unexplored regions and complex dynamics in a game. So, how do these curious agents fare? Well, quite well. In Pong, when the agent plays against itself, it will end up in long matches passing the ball between the two paddles. How about bowling? Well, I cannot resist but quote the authors for this one. The agent learned to play the game better than agents trained to maximize the clipped extrinsic reward directly. We think this is because the agent gets attracted to the difficult to predict flashing of the scoreboard occurring after the strikes. With a little stretch, one could perhaps say that this AI is showing signs of addiction. I wonder how it would do with modern mobile games with loot boxes. But we'll leave that for future work for now. How about Super Mario? Well, the agent is very curious to see how the levels continue, so it learns all the necessary skills to beat the game. Incredible. However, the more seasoned fellow scholars immediately find that there is a catch. What if we sit down the AI in front of a TV that constantly plays new material? You may think that this is some kind of a joke, but it's not. It is a perfectly valid issue because, due to its curiosity, the AI would have to stay there forever and not start exploring the level. This is the good old definition of TV addiction. Talk about human-like properties. And sure enough, as soon as we turn off the TV, the agent gets to work immediately. Who would have thought? The paper notes that this challenge needs to be dealt with over time. However, the algorithm was tested on a large variety of problems and it did not come up in practice. And the key insight is that curiosity is not only a great replacement for extrinsic rewards, the two are often aligned, but curiosity in some cases is even superior to that. This is an amazing value proposition for something that we can run on any problem without any additional work. So, curious agents that are addicted to flashing score screens and TVs. What a time to be alive. And if you enjoyed this episode and you wish to help us on our quest to inform even more people about these amazing stories, please consider supporting us on patreon.com slash two-minute papers. You can pick up cool perks there to keep your papers addiction in check. As always, there is a link to it and to the paper in the video description. Thanks for watching and for your generous support and I'll see you next time.
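To make the "curiosity as surprise" idea above a bit more tangible, here is a hedged sketch where the intrinsic reward is the prediction error of a small forward-dynamics model: the harder it is to predict the next observation from the current observation and action, the larger the bonus. The paper's actual formulation predicts in a learned or random feature space and differs in many details; the network sizes and learning rate below are placeholders for illustration.

```python
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 32, 4

# Forward-dynamics model: predict the next observation from (observation, action).
dynamics = nn.Sequential(
    nn.Linear(OBS_DIM + ACT_DIM, 128), nn.ReLU(), nn.Linear(128, OBS_DIM)
)
optimizer = torch.optim.Adam(dynamics.parameters(), lr=1e-3)

def curiosity_bonus(obs, act, next_obs):
    """Intrinsic reward = how surprised the dynamics model is by what happened."""
    pred_next = dynamics(torch.cat([obs, act], dim=1))
    per_sample_error = ((pred_next - next_obs) ** 2).mean(dim=1)

    # Keep learning the dynamics, so repeatedly seen transitions stop being surprising.
    loss = per_sample_error.mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return per_sample_error.detach()

# Toy usage: a predictable transition vs. a "noisy TV" transition.
obs = torch.randn(8, OBS_DIM)
act = torch.randn(8, ACT_DIM)
predictable_next = obs + 0.1 * act.sum(dim=1, keepdim=True)   # deterministic rule

for step in range(200):
    bonus = curiosity_bonus(obs, act, predictable_next)
print("bonus on a learnable transition:", bonus.mean().item())
print("bonus on pure noise:", curiosity_bonus(obs, act, torch.randn(8, OBS_DIM)).mean().item())
```

Note how the bonus melts away on the learnable transition but stays high on pure noise, which is exactly the noisy-TV failure mode discussed in the video.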
However, the algorithm was tested on a large variety of problems and it did not come up in practice. And the key insight is that curiosity is not only a great replacement for extrinsic rewards, the two are often aligned, but curiosity in some cases is even superior to that. This is an amazing value proposition for something that we can run on any problem without any additional work. So, curious agents that are addicted to flashing score screens and TVs. What a time to be alive. And if you enjoyed this episode and you wish to help us on our quest to inform even more people about these amazing stories, please consider supporting us on patreon.com slash two-minute papers. You can pick up cool perks there to keep your papers addiction in check. As always, there is a link to it and to the paper in the video description. Thanks for watching and for your generous support and I'll see you next time.
[{"start": 0.0, "end": 4.28, "text": " Dear Fellow Scholars, this is two-minute Papers with Karo Jornaifahir."}, {"start": 4.28, "end": 11.32, "text": " Reinforcement learning is a learning algorithm that chooses a set of actions in an environment to maximize a score."}, {"start": 11.32, "end": 16.92, "text": " This class of techniques enables us to train an AI to master video games,"}, {"start": 16.92, "end": 24.32, "text": " avoiding obstacles with a drone, cleaning up a table with a robot arm, and has many more really cool applications."}, {"start": 24.32, "end": 33.6, "text": " We use the word score and reward interchangeably, and the goal is that over time the agent has to learn to maximize a prescribed reward."}, {"start": 33.6, "end": 36.120000000000005, "text": " So, where should the rewards come from?"}, {"start": 36.120000000000005, "end": 39.44, "text": " Most techniques work by using extrinsic rewards."}, {"start": 39.44, "end": 44.08, "text": " Extrinsic rewards are only a half solution as they need to come from somewhere,"}, {"start": 44.08, "end": 49.2, "text": " either from the game in the form of a game score, which simply isn't present in every game."}, {"start": 49.2, "end": 55.720000000000006, "text": " And even if it is present in a game, it is very different for Atari Breakout and for instance a strategy game."}, {"start": 55.720000000000006, "end": 63.24, "text": " Intrinsic rewards are designed to come to the rescue, so the AI would be able to completely ignore the in-game score,"}, {"start": 63.24, "end": 68.96000000000001, "text": " and somehow have some sort of inner motivation to drive an AI to complete a level."}, {"start": 68.96000000000001, "end": 74.84, "text": " But what could possibly be a good intrinsic reward that would work well on a variety of tasks?"}, {"start": 74.84, "end": 77.64, "text": " Shouldn't this be different from problem to problem?"}, {"start": 77.64, "end": 80.04, "text": " If so, we are back to square one."}, {"start": 80.04, "end": 88.2, "text": " If we are to call our learner intelligent, then we need one algorithm that is able to solve a large number of different problems."}, {"start": 88.2, "end": 92.92, "text": " If we need to reprogram it for every game, that's just a narrow intelligence."}, {"start": 92.92, "end": 99.88, "text": " So, a key finding of this paper is that we can end out the AI with a very human-like property."}, {"start": 99.88, "end": 101.08, "text": " Curiosity."}, {"start": 101.08, "end": 110.52, "text": " Human babies also explore the world out of curiosity and as a happy side effect, learn a lot of useful skills to navigate in this world later."}, {"start": 110.52, "end": 115.8, "text": " However, as in our everyday speech, the definition of curiosity is a little nebulous."}, {"start": 115.8, "end": 118.92, "text": " We have to provide a mathematical definition for it."}, {"start": 118.92, "end": 124.03999999999999, "text": " In this work, this is defined as trying to maximize the number of surprises."}, {"start": 124.04, "end": 131.16, "text": " This will drive the learner to favor actions that lead to unexplored regions and complex dynamics in a game."}, {"start": 131.16, "end": 133.96, "text": " So, how do these curious agents fare?"}, {"start": 133.96, "end": 135.88, "text": " Well, quite good."}, {"start": 135.88, "end": 145.0, "text": " In Pong, when the agent plays against itself, it will end up in long matches passing the ball between the two paddles."}, {"start": 145.0, "end": 146.44, "text": " How 
about bowling?"}, {"start": 146.44, "end": 150.36, "text": " Well, I cannot resist but quote the authors for this one."}, {"start": 150.36, "end": 157.88000000000002, "text": " The agent learned to play the game better than agents trained to maximize the clipped extrinsic reward directly."}, {"start": 157.88000000000002, "end": 166.44000000000003, "text": " We think this is because the agent gets attracted to the difficult to predict flashing of the scoreboard occurring after the strikes."}, {"start": 166.44000000000003, "end": 172.60000000000002, "text": " With a little stretch, one could perhaps say that this AI is showing signs of addiction."}, {"start": 172.60000000000002, "end": 176.60000000000002, "text": " I wonder how it would do with modern mobile games with loot boxes."}, {"start": 176.60000000000002, "end": 179.0, "text": " But, we'll leave that for future work now."}, {"start": 179.0, "end": 180.68, "text": " How about Super Mario?"}, {"start": 180.68, "end": 188.6, "text": " Well, the agent is very curious to see how the levels continue, so it learns all the necessary skills to beat the game."}, {"start": 188.6, "end": 190.04, "text": " Incredible."}, {"start": 190.04, "end": 195.24, "text": " However, the more seasoned fellow scholars immediately find that there is a catch."}, {"start": 195.24, "end": 201.8, "text": " What if we sit down the AI in front of a TV that constantly plays new material?"}, {"start": 201.8, "end": 205.08, "text": " You may think that this is some kind of a joke, but it's not."}, {"start": 205.08, "end": 211.56, "text": " It is a perfectly valid issue because due to its curiosity, the AI would have to stay there forever"}, {"start": 211.56, "end": 213.8, "text": " and not start exploring the level."}, {"start": 213.8, "end": 217.08, "text": " This is the good old definition of TV addiction."}, {"start": 217.08, "end": 219.16000000000003, "text": " Talk about human-like properties."}, {"start": 219.16000000000003, "end": 224.52, "text": " And sure enough, as soon as we turn off the TV, the agent gets to work immediately."}, {"start": 224.52, "end": 225.72000000000003, "text": " Who would have thought?"}, {"start": 225.72000000000003, "end": 229.56, "text": " The paper notes that this challenge needs to be dealt with over time."}, {"start": 229.56, "end": 235.72, "text": " However, the algorithm was tested on a large variety of problems and it did not come up in practice."}, {"start": 235.72, "end": 241.48, "text": " And the key insight is that curiosity is not only a great replacement for extrinsic rewards,"}, {"start": 241.48, "end": 247.72, "text": " the two are often aligned, but curiosity in some cases is even superior to that."}, {"start": 247.72, "end": 254.92000000000002, "text": " This is an amazing value proposition for something that we can run on any problem without any additional work."}, {"start": 254.92, "end": 260.28, "text": " So, curious agents that are addicted to flashing score screens and TVs."}, {"start": 260.28, "end": 262.03999999999996, "text": " What a time to be alive."}, {"start": 262.03999999999996, "end": 267.4, "text": " And if you enjoyed this episode and you wish to help us on our quest to inform even more people"}, {"start": 267.4, "end": 273.96, "text": " about these amazing stories, please consider supporting us on patreon.com slash two-minute papers."}, {"start": 273.96, "end": 277.8, "text": " You can pick up cool perks there to keep your papers addiction in check."}, {"start": 277.8, "end": 281.56, "text": " As 
always, there is a link to it and to the paper in the video description."}, {"start": 281.56, "end": 285.56, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=zL6ltnSKf9k
This AI Learned To Isolate Speech Signals
The paper "Looking to Listen at the Cocktail Party: A Speaker-Independent Audio-Visual Model for Speech Separation " is available here: https://looking-to-listen.github.io/ Pick up cool perks on our Patreon page: › https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, John De Witt, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Crypto and PayPal links are available below. Thank you very much for your generous support! › PayPal: https://www.paypal.me/TwoMinutePapers › Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh › Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A › LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Thumbnail background image credit: https://pixabay.com/photo-3565815/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is a neural network-based technique that can perform audio-visual separation. Before we talk about what that is, I will tell you what it is not. It is not what we've seen in the previous episode where we could select a pixel and listen to it. Have a look. And now let's try to separate the sound of the cello and see if it knows where it comes from. This one is different. This new technique can clean up an audio signal by suppressing the noise in a busy bar, even if the source of the noise is not seen in the video. It can also enhance the voice of the speaker at the same time. Let's listen. So the task is: given the video, one person's voice gets cleaned up, and everything else gets suppressed. So the task is: given the video, one person's voice gets cleaned up, and everything else gets suppressed. Or, if we have a Skype meeting with someone in a lab or a busy office where multiple people are speaking nearby, we can also perform a similar speech separation, which would be a godsend for future meetings. So we've been trying to train this network to input two embedding as an output of three. Yeah, this is just an extra experiment for the paper. Hi guys. And I think if you are a parent, the utility of this example needs no further explanation. Hi, my name is I'm a I am not sure if I ever encountered the term screaming children in the evening. I am not sure if I ever encountered the term screaming children in the abstract of an AI paper. So that one was also a first here. This is a super difficult task because the AI needs to understand what lip motions correspond to what kind of sounds, which is different for all kinds of languages, age groups, and head positions. To this end, the authors put together a stupendously large data set with almost 300,000 videos with clean speech signals. This data set is then run through a multi-stream neural network that detects the number of human faces within the video, generates small thumbnails of them, and observes how they move over time. It also analyzes the audio signals separately, then fuses these elements together with a recurrent neural network to output the separated audio waveforms. A key advantage of this architecture and training method is that, as opposed to many previous works, this is speaker independent. Therefore, we don't need specific training data from the speaker we want to use this on. This is a huge leap in terms of usability. The paper also contains an excellent demonstration of this concept by taking a piece of footage from Conan O'Brien's show where two comedians were booked for the same time slot and talk over each other. The result is a performance where it is near impossible to understand what they are saying, but with this technique, we can hear both of them one by one, crystal clear. You see some results over here, but make sure to click the paper link in the description to hear the sound samples as well. Thanks for watching and for your generous support and I'll see you next time.
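The multi-stream architecture described above, a visual stream of face thumbnails and an audio stream fused by a recurrent network that outputs separation masks, can be sketched very roughly as follows. This is not the authors' model: it assumes the two streams have already been resampled to a common frame rate, handles a single target speaker, and uses made-up feature sizes purely for illustration.

```python
import torch
import torch.nn as nn

class ToyAudioVisualSeparator(nn.Module):
    """Fuse per-frame face embeddings with spectrogram frames and predict a mask."""

    def __init__(self, n_freq=257, face_dim=128, hidden=256):
        super().__init__()
        self.face_proj = nn.Linear(face_dim, hidden)    # visual stream
        self.audio_proj = nn.Linear(n_freq, hidden)     # audio stream
        self.fusion = nn.LSTM(2 * hidden, hidden, batch_first=True)
        self.mask_head = nn.Sequential(nn.Linear(hidden, n_freq), nn.Sigmoid())

    def forward(self, face_embeddings, mixture_spec):
        # face_embeddings: (batch, time, face_dim), mixture_spec: (batch, time, n_freq)
        v = self.face_proj(face_embeddings)
        a = self.audio_proj(mixture_spec)
        fused, _ = self.fusion(torch.cat([v, a], dim=-1))
        mask = self.mask_head(fused)                    # values in [0, 1] per time-frequency bin
        return mask * mixture_spec                      # the selected speaker's spectrogram

# Toy usage with random tensors standing in for real features.
model = ToyAudioVisualSeparator()
faces = torch.randn(2, 100, 128)        # 100 frames of face embeddings
mixture = torch.rand(2, 100, 257)       # 100 frames of mixture magnitude spectrogram
clean = model(faces, mixture)
print(clean.shape)                      # torch.Size([2, 100, 257])
```

A real system would be trained so that the predicted spectrogram matches the clean speech of the person whose face is fed in, which is what makes the approach speaker independent: the conditioning comes from the face track, not from a per-speaker model.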
[{"start": 0.0, "end": 4.6000000000000005, "text": " Dear Fellow Scholars, this is 2 Minute Paper Sweet Carro Joel Naifa here."}, {"start": 4.6000000000000005, "end": 9.96, "text": " This is a neural network-based technique that can perform audio-visual separation."}, {"start": 9.96, "end": 13.72, "text": " Before we talk about what that is, I will tell you what it is not."}, {"start": 13.72, "end": 18.48, "text": " It is not what we've seen in the previous episode where we could select a pixel and listen"}, {"start": 18.48, "end": 19.48, "text": " to it."}, {"start": 19.48, "end": 27.8, "text": " Have a look."}, {"start": 27.8, "end": 33.6, "text": " And now let's try to separate the sound of the cello and see if it knows where it comes"}, {"start": 33.6, "end": 43.0, "text": " from."}, {"start": 43.0, "end": 44.68, "text": " This one is different."}, {"start": 44.68, "end": 50.480000000000004, "text": " This new technique can clean up an audio signal by suppressing the noise in a busy bar, even"}, {"start": 50.480000000000004, "end": 53.88, "text": " if the source of the noise is not seen in the video."}, {"start": 53.88, "end": 58.120000000000005, "text": " It can also enhance the voice of the speaker at the same time."}, {"start": 58.120000000000005, "end": 59.32, "text": " Let's listen."}, {"start": 59.32, "end": 68.92, "text": " So this task is given the video, any person who gets cleaned up, and everything else gets"}, {"start": 68.92, "end": 69.92, "text": " suppressed."}, {"start": 69.92, "end": 82.4, "text": " So this task is given the video, any person who gets cleaned up, and everything else gets"}, {"start": 82.4, "end": 83.4, "text": " suppressed."}, {"start": 83.4, "end": 89.72, "text": " Or, if we have a Skype meeting with someone in a lab or a busy office where multiple people"}, {"start": 89.72, "end": 94.76, "text": " are speaking nearby, we can also perform a similar speed separation, which would be a"}, {"start": 94.76, "end": 97.24000000000001, "text": " gut send for future meetings."}, {"start": 97.24000000000001, "end": 102.2, "text": " So we've been trying to train this network to input two embedding as an output of three."}, {"start": 102.2, "end": 106.96000000000001, "text": " Yeah, this is just an extra experiment for the paper."}, {"start": 106.96000000000001, "end": 109.84, "text": " Hi guys."}, {"start": 109.84, "end": 119.56, "text": " And I think if you are a parent, the utility of this example needs no further explanation."}, {"start": 119.56, "end": 110.84, "text": " Hi, my name is"}, {"start": 110.84, "end": 127.04, "text": " I'm a"}, {"start": 127.04, "end": 152.04000000000002, "text": " I am not sure if I ever encountered the term screaming children in the evening."}, {"start": 152.04, "end": 157.92, "text": " I am not sure if I ever encountered the term screaming children in the abstract of an"}, {"start": 157.92, "end": 159.07999999999998, "text": " AI paper."}, {"start": 159.07999999999998, "end": 161.32, "text": " So that one was also a first here."}, {"start": 161.32, "end": 167.64, "text": " This is a super difficult task because the AI needs to understand what lip motions correspond"}, {"start": 167.64, "end": 173.28, "text": " to what kind of sounds, which is different for all kinds of languages, age groups, and"}, {"start": 173.28, "end": 174.72, "text": " head positions."}, {"start": 174.72, "end": 181.72, "text": " To this end, the authors put together a stupendously large data set with almost 300,000 videos"}, {"start": 181.72, "end": 
183.6, "text": " with clean speech signals."}, {"start": 183.6, "end": 189.72, "text": " This data set is then run through a multi-stream neural network that detects the number of human"}, {"start": 189.72, "end": 195.72, "text": " faces within the video, generates small thumbnails of them, and observes how they move over"}, {"start": 195.72, "end": 196.72, "text": " time."}, {"start": 196.72, "end": 202.88, "text": " It also analyzes the audio signals separately, then fuses these elements together with the"}, {"start": 202.88, "end": 207.56, "text": " recurrent neural network to output the separated audio waveforms."}, {"start": 207.56, "end": 212.36, "text": " A key advantage of this architecture and training method is that as opposed to many previous"}, {"start": 212.36, "end": 215.68, "text": " works, this is speaker independent."}, {"start": 215.68, "end": 220.08, "text": " Therefore, we don't need specific training data from the speaker we want to use this"}, {"start": 220.08, "end": 221.08, "text": " on."}, {"start": 221.08, "end": 224.2, "text": " This is a huge leap in terms of usability."}, {"start": 224.2, "end": 229.48000000000002, "text": " The paper also contains an excellent demonstration of this concept by taking a piece of footage"}, {"start": 229.48000000000002, "end": 235.0, "text": " from Conan O'Brien's show where two comedians were booked for the same time slot and talk"}, {"start": 235.0, "end": 236.36, "text": " over each other."}, {"start": 236.36, "end": 241.44000000000003, "text": " The result is a performance where it is near impossible to understand what they are saying,"}, {"start": 241.44000000000003, "end": 246.32000000000002, "text": " but with this technique, we can hear both of them one by one crystal clear."}, {"start": 246.32000000000002, "end": 250.56, "text": " You see some results over here, but make sure to click the paper link in the description"}, {"start": 250.56, "end": 252.4, "text": " to hear the sound samples as well."}, {"start": 252.4, "end": 274.96, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=o-LU_Dja6Ks
This AI Shows Us the Sound of Pixels
The paper "The Sound of Pixels" is available here: http://sound-of-pixels.csail.mit.edu/ Pick up cool perks on our Patreon page: › https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, John De Witt, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/photo-1606337/ YouTube video credits: Old Wine - https://www.youtube.com/watch?v=bLjS6E6c0IA Fabian Rivero - https://www.youtube.com/watch?v=-HLTNgdajqw Michael Mikulka - https://www.youtube.com/watch?v=n8-2q4dheyU Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Crypto and PayPal links are available below. Thank you very much for your generous support! › PayPal: https://www.paypal.me/TwoMinutePapers › Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh › Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A › LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is a neural network-based method that is able to show us the sound of pixels. What this means is that it separates and localizes audio signals in videos. The two keywords are separation and localization, so let's take a look at these one by one. Localization means that we can pick a pixel in the image and it shows us the sound that comes from that location, and the separation part means that ideally we only hear that particular sound source. Let's have a look at an example. Here's an input video. And now let's try to separate the sound of the cello and see if it knows where it comes from. Same with the guitar. Now for a trickier question. Even though there are sound reverberations off the walls, the walls don't directly emit sound themselves, so I am hoping to hear nothing now. Let's see. Flat signal. Great. So how does this work? It is a neural network-based solution that has watched 60 hours of musical performances to be able to pull this off, and it learns that a change in sound can often be traced back to a change in the video footage as a musician is playing an instrument. As a result, get this. No supervision is required. This means that we don't need to label this data, or in other words, we don't need to specify how each pixel sounds. It learns to infer all this information from the video and sound signals by itself. This is huge; otherwise, just imagine how many work hours it would require to annotate all this data. And another cool application is that if we can separate these signals, then we can also independently adjust the sound of these instruments. Have a look. Now, clearly it is not perfect as some frequencies may bleed over from one instrument to the other, and there are also other methods to separate audio signals, but this particular one does not require any expertise, so I see a great value proposition there. If you wish to create a separated version of a video clip and use it for karaoke, or just subtract the guitar and play it yourself, I would look no further. Also, you know the drill, this will be way better a couple of papers down the line. So, what do you think? What possible applications do you envision for this? Where could it be improved? Let me know below in the comments. Thanks for watching and for your generous support, and I'll see you next time.
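One way systems like this can learn without labels is to manufacture their own training pairs: mix the audio of two clips and ask the network to recover each original track, conditioned on the corresponding video. Whether this exact recipe matches the paper's training procedure is not stated in the video, so treat the sketch below as a generic illustration; it also cheats by using the ground-truth tracks directly to show how separated sources can be remixed at different volumes.

```python
import numpy as np

def make_training_example(audio_a, audio_b):
    """Self-supervised "mix and separate" style target construction: mixing two
    clips gives a free (mixture -> original tracks) training pair, so no human
    annotation of how each pixel sounds is needed."""
    mixture = audio_a + audio_b
    return mixture, (audio_a, audio_b)   # network input, separation targets

def adjust_instrument_volumes(separated_tracks, gains):
    """Once sources are separated, remixing with new gains adjusts each instrument."""
    return sum(g * track for g, track in zip(gains, separated_tracks))

# Toy usage with two synthetic "instruments".
t = np.linspace(0, 1, 16000)
cello = 0.5 * np.sin(2 * np.pi * 220 * t)
guitar = 0.5 * np.sin(2 * np.pi * 440 * t)

mixture, targets = make_training_example(cello, guitar)
remix = adjust_instrument_volumes(targets, gains=[1.0, 0.2])   # turn the guitar down
print(mixture.shape, remix.shape)
```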
[{"start": 0.0, "end": 5.0, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 5.0, "end": 11.040000000000001, "text": " This is a neural network-based method that is able to show us the sound of pixels."}, {"start": 11.040000000000001, "end": 16.36, "text": " What this means is that it separates and localizes audio signals in videos."}, {"start": 16.36, "end": 23.080000000000002, "text": " The two keywords are separation and localization, so let's take a look at these one by one."}, {"start": 23.080000000000002, "end": 28.080000000000002, "text": " Localization means that we can pick a pixel in the image and it shows us the sound that"}, {"start": 28.08, "end": 33.879999999999995, "text": " comes from that location and the separation part means that ideally we only hear that"}, {"start": 33.879999999999995, "end": 35.879999999999995, "text": " particular sound source."}, {"start": 35.879999999999995, "end": 37.519999999999996, "text": " Let's have a look at an example."}, {"start": 37.519999999999996, "end": 46.4, "text": " Here's an input video."}, {"start": 46.4, "end": 52.16, "text": " And now let's try to separate the sound of the cello and see if it knows where it comes"}, {"start": 52.16, "end": 61.4, "text": " from."}, {"start": 61.4, "end": 70.6, "text": " Same with the guitar."}, {"start": 70.6, "end": 72.92, "text": " Now for a trickier question."}, {"start": 72.92, "end": 77.44, "text": " Even though there are sound reverberations of the walls, but the walls don't directly"}, {"start": 77.44, "end": 82.08, "text": " emit sound themselves, so I am hoping to hear nothing now."}, {"start": 82.08, "end": 89.2, "text": " Let's see."}, {"start": 89.2, "end": 90.2, "text": " Flat signal."}, {"start": 90.2, "end": 91.52, "text": " Great."}, {"start": 91.52, "end": 93.52, "text": " So how does this work?"}, {"start": 93.52, "end": 98.88, "text": " It is a neural network-based solution that has washed 60 hours of musical performances"}, {"start": 98.88, "end": 104.64, "text": " to be able to pull this off and it learns that a change in sound can often track back"}, {"start": 104.64, "end": 109.52, "text": " to a change in the video footage as a musician is playing an instrument."}, {"start": 109.52, "end": 111.64, "text": " As a result, get this."}, {"start": 111.64, "end": 114.11999999999999, "text": " No supervision is required."}, {"start": 114.11999999999999, "end": 119.44, "text": " This means that we don't need to label this data or in other words, we don't need to specify"}, {"start": 119.44, "end": 121.39999999999999, "text": " how each pixel sounds."}, {"start": 121.39999999999999, "end": 127.96, "text": " It learns to infer all this information from the video and sound signals by itself."}, {"start": 127.96, "end": 133.72, "text": " This is huge and otherwise, just imagine how many work hours that would require to annotate"}, {"start": 133.72, "end": 135.6, "text": " all this data."}, {"start": 135.6, "end": 140.68, "text": " And another cool application is that if we can separate these signals, then we can also"}, {"start": 140.68, "end": 144.79999999999998, "text": " independently adjust the sound of these instruments."}, {"start": 144.79999999999998, "end": 154.79999999999998, "text": " Have a look."}, {"start": 154.8, "end": 170.16000000000003, "text": " Now, clearly it is not perfect as some frequencies may bleed over from one instrument to the other,"}, {"start": 170.16000000000003, "end": 175.0, "text": " and there are also 
other methods to separate audio signals, but this particular one does"}, {"start": 175.0, "end": 179.72000000000003, "text": " not require any expertise, so I see a great value proposition there."}, {"start": 179.72, "end": 185.16, "text": " If you wish to create a separate version of a video clip and use it for karaoke or just"}, {"start": 185.16, "end": 189.36, "text": " subtract the guitar and play it yourself, I would look no further."}, {"start": 189.36, "end": 194.72, "text": " Also, you know the drill, this will be way better a couple of papers down the line."}, {"start": 194.72, "end": 196.72, "text": " So, what do you think?"}, {"start": 196.72, "end": 199.64, "text": " What possible applications do you envision for this?"}, {"start": 199.64, "end": 201.04, "text": " Where could it be improved?"}, {"start": 201.04, "end": 202.6, "text": " Let me know below in the comments."}, {"start": 202.6, "end": 209.6, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=X1cPSvPagNI
Full-Time Papers, Maybe Someday?
Pick up cool perks on our Patreon page: › https://www.patreon.com/TwoMinutePapers Crypto and PayPal links are available below. Thank you very much for your generous support! › PayPal: https://www.paypal.me/TwoMinutePapers › Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh › Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A › LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, John De Witt, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/photo-1149962/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This video is not about a paper, but about the video series itself. I don't do this often, but for the sake of transparency, I wanted to make sure to tell you about this. We have recently hit 175,000 subscribers on the channel. I find this number almost unfathomable, and please know that this became a possibility only because of you, so I would like to let you know how grateful I am for your support here on YouTube and Patreon. As most of you know, I still work as a full-time researcher at the Technical University of Vienna, and I get the question, why not go full-time on Two Minute Papers, from you Fellow Scholars increasingly often. The answer is that over time, I would love to, but our current financial situation does not allow it. Let me explain that. First, I tried to run the channel solely on YouTube ads, which led to a rude awakening. Most people, myself included, are very surprised when they hear that the rates had become so low that around the first 1 million viewer mark, the series earned less than a dollar a day. Then we introduced Patreon, and with your support, we are now able to buy proper equipment to make better videos for you. I can, without hesitation, say that you are the reason this channel can exist. Everything is better this way. Now we have two independent revenue sources, Patreon being the most important; however, whenever YouTube or Patreon monetization issues arise, you see many other channels disappearing into the ether. I am terrified of this, and I want to do everything I possibly can to make sure that this does not happen to us, and we can keep running the series for a long, long time. However, if anything happens to any of these revenue streams, simply put, we are toast. Even though it would be a dream come true, because of this, it would be irresponsible to go full time on the papers. To remedy this, we have been thinking about introducing a third revenue stream with sponsorships for a small number of videos each month. The majority, 75% of the videos, would remain Patreon supported, and the remaining 25% would be sponsored, just enough to enable me and my wife to do this full time in the future. There would be no other changes to the videos; Patreon supporters get all of them in early access, and as before, I choose the papers too. With this, if something happens to any of the revenue streams, we would be able to keep the series afloat without any delays. I would also have more time to every now and then fly out and inform key political decision makers on the state of AI so they can make better decisions for us. Everything else would remain the same, the videos would arrive more often and on time, and the dream could perhaps come true. I think transparency is of utmost importance and I wanted to make sure to inform you before any change happens. I hope you are okay with this, and if you are a company and you are interested in sponsoring the series, let's talk. As always, thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.92, "text": " Dear Fellow Scholars, this is 2 Minute Papers with Karoijona Ifehir."}, {"start": 4.92, "end": 9.200000000000001, "text": " This video is not about a paper, but about the video series itself."}, {"start": 9.200000000000001, "end": 13.76, "text": " I don't do this often, but for the sake of transparency, I wanted to make sure to tell"}, {"start": 13.76, "end": 14.84, "text": " you about this."}, {"start": 14.84, "end": 19.88, "text": " We have recently hit 175,000 subscribers on the channel."}, {"start": 19.88, "end": 25.32, "text": " I find this number almost unfathomable, and please know that this became a possibility"}, {"start": 25.32, "end": 29.88, "text": " only because of you, so I would like to let you know how grateful I am for your support"}, {"start": 29.88, "end": 32.44, "text": " here on YouTube and Patreon."}, {"start": 32.44, "end": 37.0, "text": " As most of you know, I still work as a full-time researcher at the Technical University of"}, {"start": 37.0, "end": 42.56, "text": " Vienna, and I get the question, why not go full-time on 2 Minute Papers from you Fellow"}, {"start": 42.56, "end": 45.0, "text": " Scholars increasingly often?"}, {"start": 45.0, "end": 50.16, "text": " The answer is that over time, I would love to, but our current financial situation does"}, {"start": 50.16, "end": 51.519999999999996, "text": " not allow it."}, {"start": 51.519999999999996, "end": 52.879999999999995, "text": " Let me explain that."}, {"start": 52.879999999999995, "end": 59.2, "text": " First, I tried to run the channel solely on YouTube ads which led to a rude awakening."}, {"start": 59.2, "end": 64.36, "text": " Most people, myself included, are very surprised when they hear that the rates had become so"}, {"start": 64.36, "end": 70.0, "text": " low that around the first 1 million viewer mark, the series earned less than a dollar a"}, {"start": 70.0, "end": 71.0, "text": " day."}, {"start": 71.0, "end": 76.52000000000001, "text": " Then we introduced Patreon, and with your support, we are now able to buy proper equipment"}, {"start": 76.52000000000001, "end": 78.4, "text": " to make better videos for you."}, {"start": 78.4, "end": 84.0, "text": " I can, without hesitation, say that you are the reason this channel can exist."}, {"start": 84.0, "end": 85.4, "text": " Everything is better this way."}, {"start": 85.4, "end": 90.68, "text": " Now we have two independent revenue sources, Patreon being the most important, however,"}, {"start": 90.68, "end": 96.48, "text": " whenever YouTube or Patreon monetization issues arise, you see many other channels disappearing"}, {"start": 96.48, "end": 97.80000000000001, "text": " into the ether."}, {"start": 97.80000000000001, "end": 103.44, "text": " I am terrified of this, and I want to do everything I possibly can to make sure that this does"}, {"start": 103.44, "end": 107.88000000000001, "text": " not happen to us, and we can keep running the series for a long, long time."}, {"start": 107.88000000000001, "end": 114.36000000000001, "text": " However, if anything happens to any of these revenue streams, simply put, we are toast."}, {"start": 114.36, "end": 119.0, "text": " Even though it would be a dream come true, because of this, it would be irresponsible to"}, {"start": 119.0, "end": 121.2, "text": " go full time on the papers."}, {"start": 121.2, "end": 126.84, "text": " To remedy this, we have been thinking about introducing a third revenue stream with sponsorships"}, {"start": 126.84, "end": 129.4, 
"text": " for a small amount of videos each month."}, {"start": 129.4, "end": 135.84, "text": " The majority, 75% of the videos would remain Patreon supported, and the remaining 25% would"}, {"start": 135.84, "end": 141.68, "text": " be sponsored, just enough to enable me and my wife to do this full time in the future."}, {"start": 141.68, "end": 146.36, "text": " There would be no other changes to the videos, Patreon supporters get all of them in early"}, {"start": 146.36, "end": 150.04000000000002, "text": " access, and as before, I choose the papers too."}, {"start": 150.04000000000002, "end": 154.28, "text": " With this, if something happens to any of the revenue streams, we would be able to keep"}, {"start": 154.28, "end": 156.88, "text": " the series afloat without any delays."}, {"start": 156.88, "end": 162.36, "text": " I would also have more time to every now and then fly out and inform key political decision"}, {"start": 162.36, "end": 167.08, "text": " makers on the state of AI so they can make better decisions for us."}, {"start": 167.08, "end": 171.96, "text": " Being else would remain the same, the videos would arrive more often in time, and the dream"}, {"start": 171.96, "end": 173.68, "text": " could perhaps come true."}, {"start": 173.68, "end": 178.92000000000002, "text": " I think transparency is of utmost importance and wanted to make sure to inform you before"}, {"start": 178.92000000000002, "end": 180.44, "text": " any change happens."}, {"start": 180.44, "end": 185.24, "text": " I hope you are okay with this, and if you are a company and you are interested in sponsoring"}, {"start": 185.24, "end": 187.36, "text": " the series, let's talk."}, {"start": 187.36, "end": 197.36, "text": " As always, thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=duCQUu8EQVA
Real-Time Holography Simulation!
The paper "Acquiring Spatially Varying Appearance of Printed Holographic Surfaces" is available here: https://wp.doc.ic.ac.uk/rgi/project/acquiring-spatially-varying-appearance-of-printed-holographic-surfaces/ Material learning and synthesis algorithm: https://users.cg.tuwien.ac.at/zsolnai/gfx/gaussian-material-synthesis/ Neural Material Synthesis: https://www.youtube.com/watch?v=XpwW3glj2T8 Pick up cool perks on our Patreon page: › https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, John De Witt, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Milan Lajtoš, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: https://www.amazon.com/dp/B06ZZ23V2L EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Thumbnail background image credit: https://pixabay.com/photo-1357029/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. If we wish to populate a virtual world with photorealistic materials, the last few years have offered a number of amazing techniques to do so. We can obtain such a material by taking a flash and no-flash photograph pair of a target material and having a neural network create a digital version of it, or, remarkably, even just one photograph is enough to perform this. The footage that you see here shows these materials after they have been rendered by a light simulation program. If we don't have physical access to these materials, we can also use a recent learning algorithm to learn our preferences and recommend new materials that we would enjoy. However, whenever I publish such a video, I always get comments asking, but what about the more advanced materials? And my answer is, you are right. Have a look at this piece of work, which is about acquiring printed holographic materials. This means that we have physical access to this holographic pattern, put a camera close by, and measure data in a way that can be imported into a light simulation program to make a digital copy of it. This idea is much less far-fetched than it sounds because we can find such materials in many everyday objects like banknotes, gift bags, clothing, or of course security holograms. However, it is also quite difficult. Look here. As you see, these holographic patterns are quite diverse, and a well-crafted algorithm would have to be able to capture this rotation effect, circular diffractive areas, firework effects, and even this iridescent glitter. That is quite a challenge. This paper proposes two novel techniques to approach this problem. The first one assumes that there is some sort of repetition in the visual structure of the hologram and takes that into consideration. As a result, it can give us high-quality results by taking only one to five photographs of a target material. The second method is more exhaustive and needs more specialized hardware, but in return can deal with arbitrary structures and requires at least four photographs at all times. These are both quite remarkable. Just think about the fact that these materials look different from every viewing angle, and they also change over the surface of the object. And for the first technique, we don't need sophisticated instruments; only a consumer DSLR camera is required. The reconstructed digital materials can be used in real time, and what's more, we can also exert artistic control over the outputs by modifying the periodicities of the material. How cool is that? And if you are about to subscribe to the series, or you are already subscribed, make sure to click the bell icon, otherwise you may miss future episodes. That would be a bummer because I have a lot more amazing papers to show you. Thanks for watching and for your generous support, and I'll see you next time.
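The first technique leans on the assumption that the hologram's visual structure repeats. As a rough illustration of that idea only, and not the paper's actual algorithm, the sketch below estimates the dominant spatial periodicities of a photographed patch with a 2D Fourier transform; the function name and the synthetic test pattern are hypothetical.

```python
# Illustrative sketch only: one simple way to expose repetition in a photographed
# holographic patch is to look for peaks in its 2D Fourier spectrum. This is not
# the paper's method; the synthetic stripe pattern below stands in for a photo.
import numpy as np

def dominant_periodicities(patch, top_k=3):
    """Return the strongest non-DC spatial frequencies (cycles per pixel) of a grayscale patch."""
    spectrum = np.abs(np.fft.rfft2(patch - patch.mean()))
    freqs_y = np.fft.fftfreq(patch.shape[0])
    freqs_x = np.fft.rfftfreq(patch.shape[1])
    order = np.argsort(spectrum, axis=None)[::-1]  # strongest coefficients first
    peaks = []
    for flat_index in order:
        iy, ix = np.unravel_index(flat_index, spectrum.shape)
        if freqs_y[iy] == 0.0 and freqs_x[ix] == 0.0:
            continue  # skip the DC component
        peaks.append((float(freqs_y[iy]), float(freqs_x[ix])))
        if len(peaks) == top_k:
            break
    return peaks

# Synthetic "holographic" patch: a diffraction-grating-like stripe pattern plus noise.
_, x = np.mgrid[0:128, 0:128]
patch = np.sin(2.0 * np.pi * x / 8.0) + 0.1 * np.random.randn(128, 128)
print(dominant_periodicities(patch))  # expect peaks near 0.125 cycles per pixel along x
```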
[{"start": 0.0, "end": 4.5600000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.5600000000000005, "end": 9.56, "text": " If we wish to populate a virtual world with photorealistic materials, the last few years"}, {"start": 9.56, "end": 13.08, "text": " have offered a number of amazing techniques to do so."}, {"start": 13.08, "end": 18.84, "text": " We can obtain such a material from a flash and no flash photograph pair of a target material"}, {"start": 18.84, "end": 25.04, "text": " and having a neural network create a digital version of it, or remarkably even just one"}, {"start": 25.04, "end": 27.6, "text": " photograph is enough to perform this."}, {"start": 27.6, "end": 32.32, "text": " This footage that you see here shows these materials after they have been rendered by"}, {"start": 32.32, "end": 34.36, "text": " a light simulation program."}, {"start": 34.36, "end": 38.72, "text": " If we don't have physical access to these materials, we can also use a recent learning"}, {"start": 38.72, "end": 44.72, "text": " algorithm to learn our preferences and recommend new materials that we would enjoy."}, {"start": 44.72, "end": 50.16, "text": " However, whenever I publish such a video, I always get comments asking, but what about"}, {"start": 50.16, "end": 52.400000000000006, "text": " the more advanced materials?"}, {"start": 52.400000000000006, "end": 54.64, "text": " And my answer is, you are right."}, {"start": 54.64, "end": 60.120000000000005, "text": " Have a look at this piece of work, which is about acquiring printed holographic materials."}, {"start": 60.120000000000005, "end": 64.48, "text": " This means that we have physical access to this holographic pattern, put a camera close"}, {"start": 64.48, "end": 70.56, "text": " by and measure data in a way that can be imported into a light simulation program to make a"}, {"start": 70.56, "end": 72.28, "text": " digital copy of it."}, {"start": 72.28, "end": 77.24000000000001, "text": " This idea is much less farfetched than it sounds because we can find such materials in"}, {"start": 77.24, "end": 85.11999999999999, "text": " many everyday objects like banknotes, giftbags, clothing, or of course security holograms."}, {"start": 85.11999999999999, "end": 88.0, "text": " However, it is also quite difficult."}, {"start": 88.0, "end": 89.0, "text": " Look here."}, {"start": 89.0, "end": 93.75999999999999, "text": " As you see, these holographic patterns are quite diverse, and the well-crafted algorithm"}, {"start": 93.75999999999999, "end": 100.0, "text": " would have to be able to capture this rotation effect, circular diffractive areas, firework"}, {"start": 100.0, "end": 103.64, "text": " effects, and even this iridescent glitter."}, {"start": 103.64, "end": 105.52, "text": " That is quite a challenge."}, {"start": 105.52, "end": 109.28, "text": " This paper proposes two novel techniques to approach this problem."}, {"start": 109.28, "end": 113.88, "text": " The first one assumes that there is some sort of repetition in the visual structure of"}, {"start": 113.88, "end": 116.88, "text": " the hologram and takes that into consideration."}, {"start": 116.88, "end": 122.67999999999999, "text": " As a result, it can give us high quality results by taking only one to five photographs"}, {"start": 122.67999999999999, "end": 124.36, "text": " of a target material."}, {"start": 124.36, "end": 130.2, "text": " The second method is more exhaustive and needs more specialized hardware, but 
in return"}, {"start": 130.2, "end": 136.44, "text": " can deal with arbitrary structures and requires at least four photographs at all times."}, {"start": 136.44, "end": 138.83999999999997, "text": " These are both quite remarkable."}, {"start": 138.83999999999997, "end": 143.6, "text": " Just think about the fact that these materials look different from every viewing angle, and"}, {"start": 143.6, "end": 146.95999999999998, "text": " they also change over the surface of the object."}, {"start": 146.95999999999998, "end": 151.16, "text": " And for the first technique, we don't need sophisticated instruments, only a consumer"}, {"start": 151.16, "end": 153.92, "text": " DSLR camera is required."}, {"start": 153.92, "end": 158.56, "text": " The reconstructed digital materials can be used in real time, and what's more, we can"}, {"start": 158.56, "end": 165.0, "text": " also exert artistic control over the outputs by modifying the periodicities of the material."}, {"start": 165.0, "end": 167.08, "text": " How cool is that?"}, {"start": 167.08, "end": 171.88, "text": " And if you are about to subscribe to the series, or you are already subscribed, make sure"}, {"start": 171.88, "end": 176.16, "text": " to click the bell icon, or otherwise you may miss future episodes."}, {"start": 176.16, "end": 180.4, "text": " That would be a bummer because I have a lot more amazing papers to show you."}, {"start": 180.4, "end": 189.28, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=F-00NhYUnH4
This AI Learned How To Generate Human Appearance
Pick up cool perks on our Patreon page: › https://www.patreon.com/TwoMinutePapers The paper "A Variational U-Net for Conditional Appearance and Shape Generation" is available here: https://compvis.github.io/vunet/ Crypto and PayPal links are available below. Thank you very much for your generous support! › PayPal: https://www.paypal.me/TwoMinutePapers › Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh › Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A › LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Earlier NVIDIA episode: https://www.youtube.com/watch?v=VrgYtFhVGmg We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, John De Witt, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Milan Lajtoš, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/en/boy-ski-skiing-cold-goggles-kid-1835416/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. In this series, we often discuss that neural networks are extraordinarily useful for classification tasks. This means that if we give them an image, they can tell us what's on it, which is great for self-driving cars, image search, and a variety of other applications. However, fewer people know that they can also be used for image generation. We've seen many great examples of this, where NVIDIA's AI was able to dream up high-resolution images of imaginary celebrities. This was done using a generative adversarial network, an architecture where two neural networks battle each other. However, these methods don't work too well if we have too much variation in our datasets. For instance, they are great for faces, but not for synthesizing the entire human body. This particular technique uses a different architecture, and as a result, it can synthesize an entire human body and is also able to synthesize both shape and appearance. You will see in a moment that because of that, it can do magical things. For instance, in this example, all we have is one low-quality image of a test subject as an input, and we can give it a photo of a different person. What happens now is that the algorithm runs pose estimation on this input and transforms our test subject into that pose. The crazy thing about this is that it even creates views for new angles we didn't even have access to. In this other experiment, we have one image on the left. What we can do here is that we specify not a person, but draw a pose directly, indicating that we wish to see our test subject in this pose, and the algorithm is also able to create an appropriate new image. And again, it works for angles that require information that we don't have access to. These new angles show that the technique understands the concept of shorts or trousers, although it seems to forget to put on socks sometimes. Truth be told, I don't blame it. What is even cooler is that it seems to behave very similarly for a variety of different inputs. This is non-trivial, as this property doesn't just emerge out of thin air, and it will be a great selling point for this new method. It also supports a feature where we only need to give a crude drawing to the algorithm, and it will transform it into a photorealistic image. However, it is clear that there are many ways to fill this drawing in with information, so how do we tell the algorithm what appearance we are looking for? Well, worry not, because this technique can also perform appearance transfer. This means that we can exert artistic control over the output by providing a photo of a different object, and it will transfer the style of this photo to our input. No artistic skills needed, but good taste is as much of a necessity as ever. Yet another AI that will empower both experts and novice users alike. And while we are enjoying these amazing results, or even better, if you have already built up an addiction for the papers, you can keep it in check by supporting us on Patreon and in return getting access to these videos earlier. You can find us through patreon.com slash 2 Minute Papers. There is a link to it in the video description, and as always, to the paper as well. Thanks for watching and for your generous support, and I'll see you next time.
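The transcript describes a generator that is conditioned separately on shape (a pose) and appearance (a reference photo). The toy PyTorch sketch below only illustrates that separation; the module sizes, names, and random inputs are hypothetical and bear no relation to the paper's actual variational U-Net.

```python
# Minimal conceptual sketch (not the paper's architecture): condition an image
# generator on two separate inputs, a pose map (shape) and a reference photo
# (appearance). All layer sizes below are arbitrary placeholders.
import torch
import torch.nn as nn

class TinyAppearancePoseGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        # Appearance encoder: squeezes the reference photo into a global style vector.
        self.appearance_enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Decoder: turns the pose map, tiled with the style vector, into an image.
        self.decoder = nn.Sequential(
            nn.Conv2d(1 + 32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, pose_map, reference_photo):
        style = self.appearance_enc(reference_photo)        # (B, 32, 1, 1)
        style = style.expand(-1, -1, *pose_map.shape[2:])   # broadcast over H x W
        return self.decoder(torch.cat([pose_map, style], dim=1))

gen = TinyAppearancePoseGenerator()
pose = torch.rand(1, 1, 64, 64)   # keypoint / stick-figure heatmap (dummy data)
photo = torch.rand(1, 3, 64, 64)  # reference appearance photo (dummy data)
print(gen(pose, photo).shape)     # torch.Size([1, 3, 64, 64])
```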
[{"start": 0.0, "end": 4.84, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.84, "end": 10.8, "text": " In this series, we often discuss that neural networks are extraordinarily useful for classification"}, {"start": 10.8, "end": 11.8, "text": " tasks."}, {"start": 11.8, "end": 16.44, "text": " This means that if we give them an image, they can tell us what's on it, which is great"}, {"start": 16.44, "end": 21.240000000000002, "text": " for self-driving cars, image search, and a variety of other applications."}, {"start": 21.240000000000002, "end": 26.240000000000002, "text": " However, fewer people know that they can also be used for image generation."}, {"start": 26.24, "end": 32.739999999999995, "text": " We've seen many great examples of this, where NVIDIA's AI was able to dream up high-resolution"}, {"start": 32.739999999999995, "end": 35.599999999999994, "text": " images of imaginary celebrities."}, {"start": 35.599999999999994, "end": 41.32, "text": " This was done using a generative adversarial network, an architecture where two neural networks"}, {"start": 41.32, "end": 42.519999999999996, "text": " battle each other."}, {"start": 42.519999999999996, "end": 48.76, "text": " However, these methods don't work too well if we have too much variation in our datasets."}, {"start": 48.76, "end": 54.56, "text": " For instance, they are great for faces, but not for synthesizing the entire human body."}, {"start": 54.56, "end": 59.52, "text": " This particular technique uses a different architecture, and as a result, can synthesize"}, {"start": 59.52, "end": 66.36, "text": " an entire human body and is also able to synthesize both shape and appearance."}, {"start": 66.36, "end": 71.0, "text": " You will see in a moment that because of that, it can do magical things."}, {"start": 71.0, "end": 76.28, "text": " For instance, in this example, all we have is one low-quality image of a test subject"}, {"start": 76.28, "end": 80.28, "text": " as an input, and we can give it a photo of a different person."}, {"start": 80.28, "end": 85.92, "text": " What happens now is that the algorithm runs pose estimation on this input and transforms"}, {"start": 85.92, "end": 88.4, "text": " our test subject into that pose."}, {"start": 88.4, "end": 93.28, "text": " The crazy thing about this is that it even creates views for new angles we didn't even"}, {"start": 93.28, "end": 96.4, "text": " have access to."}, {"start": 96.4, "end": 99.88, "text": " In this other experiment, we have one image on the left."}, {"start": 99.88, "end": 105.76, "text": " What we can do here is that we specify not a person, but draw a pose directly indicating"}, {"start": 105.76, "end": 111.32000000000001, "text": " that we wish to see our test subject in this pose, and the algorithm is also able to create"}, {"start": 111.32000000000001, "end": 113.28, "text": " an appropriate new image."}, {"start": 113.28, "end": 118.64, "text": " And again, it works for angles that require information that we don't have access to."}, {"start": 118.64, "end": 124.28, "text": " These new angles show that the technique understands the concept of shorts or trousers,"}, {"start": 124.28, "end": 127.52000000000001, "text": " although it seems to forget to put on socks sometimes."}, {"start": 127.52000000000001, "end": 130.32, "text": " Truth be told, I don't blame it."}, {"start": 130.32, "end": 135.6, "text": " What is even cooler is that it seems to behave very similarly for a variety of different"}, 
{"start": 135.6, "end": 136.76, "text": " inputs."}, {"start": 136.76, "end": 141.79999999999998, "text": " This is non-trivial, as this property doesn't just emerge out of thin air and will be a"}, {"start": 141.79999999999998, "end": 145.0, "text": " great selling point for this new method."}, {"start": 145.0, "end": 149.6, "text": " It also supports a feature where we need to give a crew drawing to the algorithm, and it"}, {"start": 149.6, "end": 152.6, "text": " will transform it into a photorealistic image."}, {"start": 152.6, "end": 157.72, "text": " However, it is clear that there are many ways to feel destroying with information, so"}, {"start": 157.72, "end": 161.56, "text": " how do we tell the algorithm what appearance we are looking for?"}, {"start": 161.56, "end": 166.88, "text": " Well, worry not, because this technique can also perform appearance transfer."}, {"start": 166.88, "end": 172.4, "text": " This means that we can exert artistic control over the output by providing a photo of a different"}, {"start": 172.4, "end": 177.52, "text": " object and it will transfer the style of this photo to our input."}, {"start": 177.52, "end": 182.92000000000002, "text": " No artistic skills needed, but good taste is as much of a necessity as ever."}, {"start": 182.92000000000002, "end": 187.84, "text": " Yet another AI that will empower both experts and novice users alike."}, {"start": 187.84, "end": 192.64000000000001, "text": " And while we are enjoying these amazing results, or even better, if you have already built up"}, {"start": 192.64000000000001, "end": 197.76, "text": " an addiction for the papers, you can keep it in check by supporting us on Patreon and"}, {"start": 197.76, "end": 201.36, "text": " in return getting access to these videos earlier."}, {"start": 201.36, "end": 205.84, "text": " You can find us through patreon.com slash 2 Minute Papers."}, {"start": 205.84, "end": 209.92000000000002, "text": " There is a link to it in the video description and as always to the paper as well."}, {"start": 209.92, "end": 219.92, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=Bv3yat484aQ
Multilayer Light Simulations: More Beautiful Images, Faster
The paper "Position-Free Monte Carlo Simulation for Arbitrary Layered BSDFs" is available here: https://shuangz.com/projects/layered-sa18/ My Rendering course at the TU Wien: https://www.youtube.com/playlist?list=PLujxSBD-JXgnGmsn7gEyN28P1DnRZG7qi More about the talk: › Conference - https://ec.europa.eu/epsc/events/election-interference-digital-age-building-resilience-cyber-enabled-threats_en › My think piece - https://medium.com/election-interference-in-the-digital-age Pick up cool perks on our Patreon page: › https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, John De Witt, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Milan Lajtoš, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Crypto and PayPal links are available below. Thank you very much for your generous support! › PayPal: https://www.paypal.me/TwoMinutePapers › Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh › Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A › LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Today, we are going to talk about the craft of simulating rays of light to create beautiful images, just like the ones you see here. And when I say simulating rays of light, I mean not a few, but millions and millions of light rays need to be computed, alongside how they get absorbed or scattered off of our objects in a virtual scene. Initially, we start out with a really noisy image, and as we add more rays, the image gets clearer and clearer over time. The time it takes for these images to clean up depends on the complexity of the geometry and our material models, and one thing is for sure: rendering materials that have multiple layers is a nightmare. This paper introduces an amazing new multi-layer material model to address that. Here, you see an example where we are able to stack together transparent and translucent layers to synthesize a really lifelike scratched metal material with water droplets. Also, have a look at these gorgeous materials, and note that these are all virtual materials that are simulated using physics and computer graphics. Isn't this incredible? However, some of you fellow scholars remember that we talked about multi-layered materials before. So, what's new here? This new method supports more advanced material models that previous techniques were either unable to simulate or took too long to do so. But that's not all. Have a look here. What you see is an equal-time comparison, which means that if we run the new technique against the older methods for the same amount of time, it is easy to see that we will have much less noise in our output image. This means that the images clear up quicker and we can produce them in less time. It also supports my favorite, multiple importance sampling, an aggressive noise reduction technique by Eric Veach, which is arguably one of the greatest inventions ever in light transport research. This ensures that for more difficult scenes, the images clean up much, much faster, and it has a beautiful and simple mathematical formulation. Super happy to see that it also earned him a technical Oscar award a few years ago. If you are enjoying learning about light transport, make sure to check out my course on this topic at the Technical University of Vienna. I still teach this at the university to 20 master students at a time, and thought that the teachings shouldn't only be available to a lucky few people who can afford a college education. Clearly, the teaching should be available to everyone, so we recorded it and put it online, and now everyone can watch it free of charge. I was quite stunned to see that more than 10,000 people decided to start it, so make sure to give it a go if you're interested. And just one more thing: as you are listening to this episode, I am holding a talk at the EU's Political Strategy Center. The objective of this talk is to inform political decision makers about the state of the art in AI so they can make more informed decisions for us. Thanks for watching and for your generous support, and I'll see you next time.
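Multiple importance sampling, mentioned above, combines several sampling strategies with weights such as the balance heuristic, w_i(x) = p_i(x) / sum_j p_j(x). The sketch below is a minimal, self-contained illustration of that standard formulation on a toy 1D integral; it is not the paper's layered-material estimator, and the integrand and pdfs are arbitrary choices.

```python
# Minimal sketch of multiple importance sampling with the balance heuristic.
# We integrate f(x) on [0, 1] by combining two strategies: uniform sampling and
# sampling proportional to a ramp. This is standard MIS, not the paper's method.
import random

def f(x):
    return x * x  # toy integrand; the true integral on [0, 1] is 1/3

def pdf_uniform(x):
    return 1.0

def pdf_ramp(x):
    return 2.0 * x  # a valid pdf on [0, 1]

def sample_ramp():
    return random.random() ** 0.5  # inverse-CDF sampling for the ramp pdf

def mis_estimate(n=100000):
    total = 0.0
    for _ in range(n):
        # One sample from each strategy, weighted with the balance heuristic:
        # w_i(x) = p_i(x) / (p_uniform(x) + p_ramp(x)), so the weights sum to one.
        x1 = random.random()
        w1 = pdf_uniform(x1) / (pdf_uniform(x1) + pdf_ramp(x1))
        total += w1 * f(x1) / pdf_uniform(x1)

        x2 = sample_ramp()
        if pdf_ramp(x2) > 0.0:  # guard against the (measure-zero) x2 == 0 case
            w2 = pdf_ramp(x2) / (pdf_uniform(x2) + pdf_ramp(x2))
            total += w2 * f(x2) / pdf_ramp(x2)
    return total / n

print(mis_estimate())  # should land close to 1/3
```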
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is Two Minute Papers with Karo Zsolnai-Fehir."}, {"start": 4.4, "end": 9.4, "text": " Today, we are going to talk about the craft of simulating rays of light to create beautiful"}, {"start": 9.4, "end": 11.92, "text": " images, just like the ones you see here."}, {"start": 11.92, "end": 16.88, "text": " And when I say simulating rays of light, I mean not a few, but millions and millions of"}, {"start": 16.88, "end": 21.96, "text": " light rays need to be computed alongside with how they get absorbed or scattered off of"}, {"start": 21.96, "end": 24.2, "text": " our objects in a virtual scene."}, {"start": 24.2, "end": 29.400000000000002, "text": " Initially, we start out with a really noisy image, and as we add more rays, the image gets"}, {"start": 29.4, "end": 31.72, "text": " clearer and clearer over time."}, {"start": 31.72, "end": 36.32, "text": " The time it takes for these images to clean up depends on the complexity of the geometry"}, {"start": 36.32, "end": 41.8, "text": " and our material models, and one thing is for sure, rendering materials that have multiple"}, {"start": 41.8, "end": 43.8, "text": " layers is a nightmare."}, {"start": 43.8, "end": 48.480000000000004, "text": " This paper introduces an amazing new multi-layer material model to address that."}, {"start": 48.480000000000004, "end": 53.84, "text": " Here, you see an example where we are able to stack together transparent and translucent"}, {"start": 53.84, "end": 59.64, "text": " layers to synthesize a really lifelike scratched metal material with water droplets."}, {"start": 59.64, "end": 65.28, "text": " Also, have a look at these gorgeous materials, and note that these are all virtual materials"}, {"start": 65.28, "end": 68.84, "text": " that are simulated using physics and computer graphics."}, {"start": 68.84, "end": 70.60000000000001, "text": " Isn't this incredible?"}, {"start": 70.60000000000001, "end": 75.44, "text": " However, some of you fellow scholars remember that we talked about multi-layered materials"}, {"start": 75.44, "end": 76.44, "text": " before."}, {"start": 76.44, "end": 78.68, "text": " So, what's new here?"}, {"start": 78.68, "end": 83.52000000000001, "text": " This new method supports more advanced material models that previous techniques were either"}, {"start": 83.52, "end": 87.47999999999999, "text": " unable to simulate or took too long to do so."}, {"start": 87.47999999999999, "end": 88.64, "text": " But that's not all."}, {"start": 88.64, "end": 89.84, "text": " Have a look here."}, {"start": 89.84, "end": 94.64, "text": " What you see is an equal time comparison, which means that if we run the new technique"}, {"start": 94.64, "end": 99.67999999999999, "text": " against the older methods for the same amount of time, it is easy to see that we will have"}, {"start": 99.67999999999999, "end": 102.36, "text": " much less noise in our output image."}, {"start": 102.36, "end": 107.16, "text": " This means that the images clear up quicker and we can produce them in less time."}, {"start": 107.16, "end": 112.67999999999999, "text": " It also supports my favorite, multiple important sampling, an aggressive noise reduction technique"}, {"start": 112.68, "end": 117.76, "text": " by Eric Veech, which is arguably one of the greatest inventions ever in light transport"}, {"start": 117.76, "end": 118.76, "text": " research."}, {"start": 118.76, "end": 124.24000000000001, "text": " This ensures that for more difficult scenes, the 
images clean up much, much faster and"}, {"start": 124.24000000000001, "end": 127.84, "text": " has a beautiful and simple mathematical formulation."}, {"start": 127.84, "end": 132.28, "text": " Super happy to see that it also earned him a technical Oscar award a few years ago."}, {"start": 132.28, "end": 136.68, "text": " If you are enjoying learning about light transport, make sure to check out my course on this topic"}, {"start": 136.68, "end": 138.68, "text": " at the Technical University of Vienna."}, {"start": 138.68, "end": 143.92000000000002, "text": " I still teach this at the university for 20 master students at a time and thought that"}, {"start": 143.92000000000002, "end": 149.08, "text": " the teachings shouldn't only be available for a lucky few people who can afford a college"}, {"start": 149.08, "end": 150.08, "text": " education."}, {"start": 150.08, "end": 154.72, "text": " Clearly, the teaching should be available for everyone, so we recorded it and put it"}, {"start": 154.72, "end": 158.68, "text": " online and now everyone can watch it free of charge."}, {"start": 158.68, "end": 163.84, "text": " I was quite stunned to see that more than 10,000 people decided to start it, so make sure"}, {"start": 163.84, "end": 166.04000000000002, "text": " to give it a go if you're interested."}, {"start": 166.04, "end": 170.56, "text": " And just one more thing, as you are listening to this episode, I am holding a talk at the"}, {"start": 170.56, "end": 172.95999999999998, "text": " EU's Political Strategy Center."}, {"start": 172.95999999999998, "end": 177.28, "text": " And the objective of this talk is to inform political decision makers about the state of"}, {"start": 177.28, "end": 180.88, "text": " the art in AI so they can make more informed decisions for us."}, {"start": 180.88, "end": 208.32, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=6IsIGp1IezE
Brain-to-Brain Communication is Coming!
The paper "BrainNet: A Multi-Person Brain-to-Brain Interface for Direct Collaboration Between Brains" is available here: https://arxiv.org/abs/1809.08632 Pick up cool perks on our Patreon page: › https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, John De Witt, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Milan Lajtoš, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Two Minute Papers Merch: US: https://www.amazon.com/dp/B06ZZ23V2L EU/Worldwide: https://shop.spreadshirt.net/TwoMinutePapers/ Thumbnail background image credit: https://pixabay.com/photo-2213009/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/ #brainnet #neuralink
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This is an episode that doesn't have the usual visual fireworks and is expected to get fewer clicks, but it is an important story to tell, and because of your support, we are able to cover a paper like this. So now, get this. This is a non-invasive brain-to-brain interface that uses EEG to record brain signals and TMS to deliver information to the brain. The non-invasive part is quite important; it basically means that we don't need to drill a hole in the head of the patients. That's a good idea. This image shows three humans connected via computers, two senders and one receiver. The senders provide information to the receiver about something he would otherwise not know about, and we measure whether they are able to collaboratively solve a problem together. These people never met each other and don't even know each other, and they can collaborate through this technique directly via brain signals. Wow! BCI means brain-computer interface, and CBI, as you guessed, computer-brain interface. So these brain signals can be encoded and decoded and freely transferred between people and computers. Insanity. After gathering all this information, the receiver makes a decision, which the senders also have access to, and they can transmit some more information if necessary. So what do they use it for? Of course, to play Tetris. Jokes aside, this is a great experiment where the goal is to clear a line. Simple enough, right? Not so much, because there is a twist. The receiver only sees what you see here on the left side. This is the current piece we have to place on the field, but the receiver has no idea how to rotate it because he doesn't see its surroundings. But the senders do, so they transmit the appropriate information to the receiver, who will now be able to make an informed decision as to how to rotate this piece correctly to clear a line. So does it work? The experiment is designed in a way that there is a 50% chance to be right without any additional information for the receiver, so this will be the baseline result. And the results are between 75 and 85%, which means that the interface is working and brain-to-brain collaboration is now a reality. I am out of words. The paper also talks about brain-to-brain social networks and all kinds of science fiction like that. My head is about to explode with the possibilities, who knows? Maybe in a few years we can make a super intelligent brain that combines all of our expertise and does research for all of us. Or writes Two Minute Papers episodes. This paper is a must-read. Do you have any other ideas as to how this could enhance our lives? Let me know in the comments section. Thanks for watching and for your generous support, and I'll see you next time.
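As a toy illustration of the protocol described above, and not the paper's data or code, the sketch below simulates two senders each transmitting one noisy bit to a receiver and reports the resulting accuracy against the 50% guessing baseline; the 0.8 per-channel reliability is an arbitrary, hypothetical number.

```python
# Toy simulation of the Tetris-style collaboration protocol: each sender transmits
# one noisy bit ("rotate" or "don't rotate"), the receiver follows agreeing cues
# and guesses otherwise. Numbers are hypothetical, not the paper's measurements.
import random

def run_trials(n_trials=100000, channel_reliability=0.8):
    correct = 0
    for _ in range(n_trials):
        truth = random.choice([0, 1])  # the correct rotation decision
        # Each sender's bit arrives correctly with probability channel_reliability.
        cues = [truth if random.random() < channel_reliability else 1 - truth
                for _ in range(2)]
        # Receiver strategy: follow the cues if they agree, otherwise guess.
        decision = cues[0] if cues[0] == cues[1] else random.choice([0, 1])
        correct += decision == truth
    return correct / n_trials

print(f"baseline: 0.50, collaborative accuracy: {run_trials():.2f}")  # about 0.80
```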
[{"start": 0.0, "end": 4.5200000000000005, "text": " Dear Fellow Scholars, this is two-minute papers with Karojona Ifehir."}, {"start": 4.5200000000000005, "end": 9.28, "text": " This is an episode that doesn't have the usual visual fireworks and is expected to get fewer"}, {"start": 9.28, "end": 14.32, "text": " clicks, but it is an important story to tell and because of your support, we are able to"}, {"start": 14.32, "end": 16.12, "text": " cover a paper like this."}, {"start": 16.12, "end": 17.84, "text": " So now, get this."}, {"start": 17.84, "end": 24.14, "text": " This is a non-invasive brain-to-brain interface that uses EEG to record brain signals and"}, {"start": 24.14, "end": 27.04, "text": " TMS to deliver information to the brain."}, {"start": 27.04, "end": 31.08, "text": " The non-invasive part is quite important, it basically means that we don't need to"}, {"start": 31.08, "end": 33.4, "text": " drill a hole in the head of the patients."}, {"start": 33.4, "end": 34.64, "text": " That's a good idea."}, {"start": 34.64, "end": 40.36, "text": " This image shows three humans connected via computers, two senders and one receiver."}, {"start": 40.36, "end": 45.2, "text": " The senders provide information to the receiver about something he would otherwise not know"}, {"start": 45.2, "end": 50.8, "text": " about and we measure if they are able to collaboratively solve a problem together."}, {"start": 50.8, "end": 55.6, "text": " These people never met each other and don't even know each other and they can collaborate"}, {"start": 55.6, "end": 59.44, "text": " through this technique directly via brain signals."}, {"start": 59.44, "end": 60.68, "text": " Wow!"}, {"start": 60.68, "end": 66.36, "text": " The BCI means brain-computer interface and the CBI as you guessed, the computer-brain"}, {"start": 66.36, "end": 67.36, "text": " interface."}, {"start": 67.36, "end": 73.04, "text": " So these brain signals can be encoded and decoded and freely transferred between people and"}, {"start": 73.04, "end": 74.52000000000001, "text": " computers."}, {"start": 74.52000000000001, "end": 75.8, "text": " Insanity."}, {"start": 75.8, "end": 80.08, "text": " After gathering all this information, the receiver makes a decision which the senders"}, {"start": 80.08, "end": 85.0, "text": " also have access to and can transmit some more information if necessary."}, {"start": 85.0, "end": 87.28, "text": " So what do they use it for?"}, {"start": 87.28, "end": 89.52, "text": " Of course, to play Tetris."}, {"start": 89.52, "end": 94.32, "text": " Jokes aside, this is a great experiment where the goal is to clear a line."}, {"start": 94.32, "end": 96.0, "text": " Simple enough, right?"}, {"start": 96.0, "end": 98.12, "text": " Not so much because there is a twist."}, {"start": 98.12, "end": 101.96000000000001, "text": " The receiver only sees what you see here on the left side."}, {"start": 101.96000000000001, "end": 106.6, "text": " This is the current piece we have to place on the field but the receiver has no idea how"}, {"start": 106.6, "end": 110.12, "text": " to rotate it because he doesn't see its surroundings."}, {"start": 110.12, "end": 114.8, "text": " But the senders do so they transmit the appropriate information to the receiver who will"}, {"start": 114.8, "end": 119.75999999999999, "text": " now be able to make an informed decision as to how to rotate this piece correctly to"}, {"start": 119.75999999999999, "end": 122.75999999999999, "text": " clear a line."}, {"start": 122.75999999999999, 
"end": 124.64, "text": " So does it work?"}, {"start": 124.64, "end": 130.0, "text": " The experiment is designed in a way that there is a 50% chance to be right without any additional"}, {"start": 130.0, "end": 135.07999999999998, "text": " information for the receiver, so this will be the baseline result."}, {"start": 135.07999999999998, "end": 141.84, "text": " And the results are between 75 and 85% which means that the interface is working and"}, {"start": 141.84, "end": 145.44, "text": " brain-to-brain collaboration is now a reality."}, {"start": 145.44, "end": 147.44, "text": " I am out of words."}, {"start": 147.44, "end": 152.52, "text": " The paper also talks about brain-to-brain social networks and all kinds of science fiction"}, {"start": 152.52, "end": 153.52, "text": " like that."}, {"start": 153.52, "end": 157.56, "text": " My head is about to explode with the possibilities, who knows?"}, {"start": 157.56, "end": 163.28, "text": " Maybe in a few years we can make a super intelligent brain that combines all of our expertise"}, {"start": 163.28, "end": 165.88, "text": " and does research for all of us."}, {"start": 165.88, "end": 168.48000000000002, "text": " Or writes two minute paper's episodes."}, {"start": 168.48000000000002, "end": 170.36, "text": " This paper is a must read."}, {"start": 170.36, "end": 174.04000000000002, "text": " Do you have any other ideas as to how this could enhance our lives?"}, {"start": 174.04000000000002, "end": 175.68, "text": " Let me know in the comments section."}, {"start": 175.68, "end": 203.32, "text": " Thanks for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=kBFMsY5ZP0o
This AI Senses Humans Through Walls 👀
Pick up cool perks on our Patreon page: › https://www.patreon.com/TwoMinutePapers Crypto and PayPal links are available below. Thank you very much for your generous support! › PayPal: https://www.paypal.me/TwoMinutePapers › Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh › Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A › LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg The paper "Through-Wall Human Pose Estimation Using Radio Signals " is available here: http://rfpose.csail.mit.edu/ We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, John De Witt, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Milan Lajtoš, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Thumbnail background image credit: https://pixabay.com/en/texture-wall-gray-wall-texture-1033755/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. Pose estimation is an interesting area of research where we typically have a few images or video footage of humans, and we automatically try to extract the pose a person was taking. In short, the input is one or more photos, and the output is typically a skeleton of the person. So what is this good for? A lot of things. For instance, we can use these skeletons to cheaply transfer the gestures of a human onto a virtual character, fall detection for the elderly, analyzing the motion of athletes, and many, many others. This work showcases a neural network that measures how Wi-Fi radio signals bounce around in the room and reflect off of the human body, and from these murky waves it estimates where we are. Not only that, but it is also accurate enough to tell us our pose. As you see here, since the Wi-Fi signal also travels in the dark, this pose estimation works really well in poor lighting conditions. That is a remarkable feat. But now, hold on to your papers, because that's nothing compared to what you are about to see now. Have a look here. We know that Wi-Fi signals go through walls. So perhaps this means that... no, that can't be true, right? It tracks the pose of this human as he enters the room, and now, as he disappears, look, the algorithm still knows where he is. That's right, this means that it can also detect our pose through walls. What kind of wizardry is that? Now, note that this technique doesn't look at the video feed we are now looking at; it is there for us for visual reference. It is also quite remarkable that the signal being sent out is a thousand times weaker than an actual Wi-Fi signal, and it can also detect multiple humans. This is not much of a problem with color images because we can clearly see everyone in an image, but the radio signals are much more difficult to read when they reflect off of multiple bodies in the scene. The whole technique works through a teacher-student network structure. The teacher is a standard pose estimation neural network that looks at a color image and predicts the pose of the humans therein. So far, so good, nothing new here. However, there is a student network that looks at the correct decisions of the teacher but has the radio signal as an input instead. As a result, it will learn what the different radio signal distributions mean and how they relate to human positions and poses. As the name says, the teacher shows the student neural network the correct results, and the student learns how to produce them from radio signals instead of images. If anyone had said that they were working on this problem 10 years ago, they would have likely ended up in an asylum. Today, it is reality. What a time to be alive. Also, if you enjoyed this episode, please consider supporting the show at patreon.com slash two minute papers. You can pick up really cool perks like getting your name shown as a key supporter in the video, and more. Because of your support, we are able to create all of these videos smooth and creamy, in 4K resolution and 60 frames per second, and with closed captions. And we are currently saving up for a new video editing rig to make better videos for you. We also support one-time payments through PayPal and the usual cryptocurrencies. More details about all of these are available in the video description, and as always, thanks for watching and for your generous support, and I'll see you next time.
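The teacher-student setup described above can be sketched as cross-modal supervision: a frozen, image-based pose network produces targets, and a radio-only student is trained to match them. The PyTorch snippet below is a hypothetical, heavily simplified stand-in with dummy data, not the paper's actual architecture.

```python
# Minimal sketch of cross-modal teacher-student training: a frozen image-based
# pose "teacher" produces targets, and a "student" that only sees radio-signal
# features learns to reproduce them. Both networks and all tensors are dummies.
import torch
import torch.nn as nn

N_KEYPOINTS = 14

teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, N_KEYPOINTS * 2))  # image -> 2D keypoints
student = nn.Sequential(nn.Flatten(), nn.Linear(2 * 64, 128), nn.ReLU(),
                        nn.Linear(128, N_KEYPOINTS * 2))                         # radio -> 2D keypoints

teacher.eval()
for p in teacher.parameters():
    p.requires_grad_(False)  # the teacher is pre-trained and frozen; only the student learns

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    images = torch.rand(8, 3, 32, 32)  # synchronized camera frames (dummy data)
    radio = torch.rand(8, 2, 64)       # synchronized radio heatmaps (dummy data)

    with torch.no_grad():
        targets = teacher(images)      # pseudo ground-truth poses from the visual teacher

    preds = student(radio)             # the student never sees the images
    loss = loss_fn(preds, targets)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```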
[{"start": 0.0, "end": 4.16, "text": " Dear Fellow Scholars, this is two-minute papers with K\u00e1roly Zsolnai-Feh\u00e9r."}, {"start": 4.16, "end": 9.88, "text": " Pose estimation is an interesting area of research where we typically have a few images or video"}, {"start": 9.88, "end": 15.68, "text": " footage of humans and we automatically try to extract the pose a person was taking."}, {"start": 15.68, "end": 21.400000000000002, "text": " In short, the input is one or more photo and the output is typically a skeleton of the"}, {"start": 21.400000000000002, "end": 22.400000000000002, "text": " person."}, {"start": 22.400000000000002, "end": 24.400000000000002, "text": " So what is this good for?"}, {"start": 24.400000000000002, "end": 25.8, "text": " A lot of things."}, {"start": 25.8, "end": 30.76, "text": " For instance, we can use these skeletons to cheaply transfer the gestures of a human"}, {"start": 30.76, "end": 36.96, "text": " onto a virtual character, full detection for the elderly, analyzing the motion of athletes"}, {"start": 36.96, "end": 39.2, "text": " and many, many others."}, {"start": 39.2, "end": 45.08, "text": " This work showcases a neural network that measures how the Wi-Fi radio signals bounce around"}, {"start": 45.08, "end": 50.88, "text": " in the room and reflect off of the human body and from these murky waves it estimates where"}, {"start": 50.88, "end": 51.88, "text": " we are."}, {"start": 51.88, "end": 56.400000000000006, "text": " Not only that, but it is also accurate enough to tell us our pose."}, {"start": 56.400000000000006, "end": 61.480000000000004, "text": " As you see here, as the Wi-Fi signal also traverses in the dark, this pose estimation works"}, {"start": 61.480000000000004, "end": 64.4, "text": " really well in poor lighting conditions."}, {"start": 64.4, "end": 66.4, "text": " That is a remarkable feat."}, {"start": 66.4, "end": 71.0, "text": " But now, hold on to your papers because that's nothing compared to what you are about to"}, {"start": 71.0, "end": 72.24000000000001, "text": " see now."}, {"start": 72.24000000000001, "end": 73.24000000000001, "text": " Have a look here."}, {"start": 73.24000000000001, "end": 76.44, "text": " We know that Wi-Fi signals go through walls."}, {"start": 76.44, "end": 81.64, "text": " So perhaps this means that that can't be true, right?"}, {"start": 81.64, "end": 87.88, "text": " It tracks the pose of this human as he enters the room and now, as he disappears, look,"}, {"start": 87.88, "end": 90.56, "text": " the algorithm still knows where he is."}, {"start": 90.56, "end": 95.48, "text": " That's right, this means that it can also detect our pose through walls."}, {"start": 95.48, "end": 97.72, "text": " What kind of wizardry is that?"}, {"start": 97.72, "end": 102.2, "text": " Now, note that this technique doesn't look at the video feed we are now looking at."}, {"start": 102.2, "end": 104.48, "text": " It is there for us for visual reference."}, {"start": 104.48, "end": 109.76, "text": " It is also quite remarkable that the signal being sent out is a thousand times weaker than"}, {"start": 109.76, "end": 114.28, "text": " an actual Wi-Fi signal and it can also detect multiple humans."}, {"start": 114.28, "end": 118.76, "text": " This is not much of a problem with color images because we can clearly see everyone in an"}, {"start": 118.76, "end": 123.52000000000001, "text": " image, but the radio signals are much more difficult to read when they reflect off of"}, {"start": 123.52000000000001, "end": 
125.56, "text": " multiple bodies in the scene."}, {"start": 125.56, "end": 129.84, "text": " The whole technique works through using a teacher's student network structure."}, {"start": 129.84, "end": 134.72, "text": " The teacher is a standard pose estimation neural network that looks at a color image and"}, {"start": 134.72, "end": 137.48000000000002, "text": " predicts the pose of the humans therein."}, {"start": 137.48, "end": 140.28, "text": " So far, so good, nothing new here."}, {"start": 140.28, "end": 145.07999999999998, "text": " However, there is a student network that looks at the correct decisions of the teacher but"}, {"start": 145.07999999999998, "end": 148.32, "text": " has the radio signal as an input instead."}, {"start": 148.32, "end": 153.39999999999998, "text": " As a result, it will learn what the different radio signal distributions mean and how they"}, {"start": 153.39999999999998, "end": 156.35999999999999, "text": " relate to human positions and poses."}, {"start": 156.35999999999999, "end": 161.79999999999998, "text": " As the name says, the teacher shows the student neural network the correct results and the student"}, {"start": 161.79999999999998, "end": 166.39999999999998, "text": " learns how to produce them from radio signals instead of images."}, {"start": 166.4, "end": 170.92000000000002, "text": " If anyone said that they were working on this problem 10 years ago, they would have likely"}, {"start": 170.92000000000002, "end": 172.68, "text": " ended up in an asylum."}, {"start": 172.68, "end": 174.96, "text": " Today, it is reality."}, {"start": 174.96, "end": 176.52, "text": " What a time to be alive."}, {"start": 176.52, "end": 182.24, "text": " Also, if you enjoyed this episode, please consider supporting the show at patreon.com slash"}, {"start": 182.24, "end": 183.64000000000001, "text": " two minute papers."}, {"start": 183.64000000000001, "end": 187.96, "text": " You can pick up really cool perks like getting your name shown as a key supporter in the"}, {"start": 187.96, "end": 189.68, "text": " video and more."}, {"start": 189.68, "end": 194.84, "text": " Because of your support, we are able to create all of these videos smooth and creamy in"}, {"start": 194.84, "end": 200.20000000000002, "text": " 4k resolution and 60 frames per second and with close captions."}, {"start": 200.20000000000002, "end": 204.68, "text": " And we are currently saving up for a new video editing rig to make better videos for you."}, {"start": 204.68, "end": 209.2, "text": " We also support one-time payments through PayPal and the usual cryptocurrencies."}, {"start": 209.2, "end": 214.0, "text": " More details about all of these are available in the video description and as always, thanks"}, {"start": 214.0, "end": 241.12, "text": " for watching and for your generous support and I'll see you next time."}]
Two Minute Papers
https://www.youtube.com/watch?v=txHQoYKaSUk
This Robot Learned To Clean Up Clutter
The paper "Learning Synergies between Pushing and Grasping with Self-supervised Deep Reinforcement Learning" is available here: http://vpg.cs.princeton.edu/ Pick up cool perks on our Patreon page: › https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, John De Witt, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Milan Lajtoš, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Crypto and PayPal links are available below. Thank you very much for your generous support! › PayPal: https://www.paypal.me/TwoMinutePapers › Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh › Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A › LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Thumbnail background image credit: https://pixabay.com/photo-3198094/ Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. This robot was tasked with cleaning up this table. Normally, anyone who watches this series knows that would be no big deal for any modern learning algorithm. Just grab it, right? Well, not in this case, because Reason Number 1, several objects are tightly packed together, and Reason Number 2, they are too wide to hold with the fingers. This means that the robot needs to figure out a series of additional actions to push the other pieces around and finally grab the correct one. Look, it found out that sometimes pushing helps grasping by making space for the fingers to grab these objects. This is a bit like the Roomba vacuum cleaner robot, but even better for clutter. Really cool. This robot arm works the following way. It has an RGB-D camera, which endows it with the ability to see both color and depth. Now that we have this image, we have not one, but two neural networks looking at it. One is used to predict the utility of pushing at different possible locations and one for grasping. Finally, a decision is made as to which motion would lead to the biggest improvement in the state of the table. So, what about the training process? As you see, the speed of this robot arm is limited and we may have to wait for a long time for it to learn anything useful and not just flail around destroying other nearby objects. The solution includes my favorite part, training the robot within a simulated environment where these commands can be executed within milliseconds, speeding up the training process significantly. Our hope is always that the principles learned within the simulation apply to reality. Checkmark. The simulation is also very useful to make comparisons with other state-of-the-art algorithms easier. And, do you know what the bane of many, many learning algorithms is? Generalization. This means that if the technique was designed well, it can be trained on matte-looking wooden blocks, and it will do well when it encounters new objects that are vastly different in shape and appearance. And as you see on the right, remarkably, this is exactly the case. Checkmark. This makes us one step closer to learning algorithms that can see the world around us, interpret it, and make proper decisions to carry out the plan. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.4, "text": " Dear Fellow Scholars, this is two-minute papers with Kato Ejona Ifehir."}, {"start": 4.4, "end": 7.4, "text": " This robot was tasked to clean up this table."}, {"start": 7.4, "end": 12.46, "text": " Normally, anyone who watches this series knows that would be no big deal for any modern"}, {"start": 12.46, "end": 14.120000000000001, "text": " learning algorithm."}, {"start": 14.120000000000001, "end": 15.6, "text": " Just grab it, right?"}, {"start": 15.6, "end": 21.88, "text": " Well, not in this case, because Reason Number 1, several objects are tightly packed together"}, {"start": 21.88, "end": 25.900000000000002, "text": " and Reason Number 2, they are too wide to hold with the fingers."}, {"start": 25.9, "end": 31.259999999999998, "text": " But this means that the robot needs to figure out a series of additional actions to push"}, {"start": 31.259999999999998, "end": 35.3, "text": " the other pieces around and finally grab the correct one."}, {"start": 35.3, "end": 41.5, "text": " Look, it found out that sometimes pushing helps grasping by making space for the fingers"}, {"start": 41.5, "end": 43.019999999999996, "text": " to grab these objects."}, {"start": 43.019999999999996, "end": 48.379999999999995, "text": " This is a bit like the Roomba vacuum cleaner robot, but even better for clatter."}, {"start": 48.379999999999995, "end": 49.379999999999995, "text": " Really cool."}, {"start": 49.379999999999995, "end": 51.86, "text": " This robot arm works the following way."}, {"start": 51.86, "end": 58.5, "text": " It has an RGBD camera, which endows it with the ability to see both color and depth."}, {"start": 58.5, "end": 63.42, "text": " Now that we have this image, we have not one, but two neural networks looking at it."}, {"start": 63.42, "end": 68.82, "text": " One is used to predict the utility of pushing at different possible locations and one for"}, {"start": 68.82, "end": 69.82, "text": " grasping."}, {"start": 69.82, "end": 74.86, "text": " Finally, a decision is made as to which motion would lead to the biggest improvement in the"}, {"start": 74.86, "end": 76.46000000000001, "text": " state of the table."}, {"start": 76.46000000000001, "end": 79.14, "text": " So, what about the training process?"}, {"start": 79.14, "end": 83.82, "text": " As you see, the speed of this robot arm is limited and we may have to wait for a long"}, {"start": 83.82, "end": 89.02, "text": " time for it to learn anything useful and not just flail around destroying other nearby"}, {"start": 89.02, "end": 90.02, "text": " objects."}, {"start": 90.02, "end": 95.5, "text": " The solution includes my favorite part, training the robot within a simulated environment"}, {"start": 95.5, "end": 100.5, "text": " where these commands can be executed within milliseconds, speeding up the training process"}, {"start": 100.5, "end": 101.9, "text": " significantly."}, {"start": 101.9, "end": 107.86, "text": " Our hope is always that the principles learned within the simulation applies to reality."}, {"start": 107.86, "end": 108.86, "text": " Checkmark"}, {"start": 108.86, "end": 113.5, "text": " The simulation is also very useful to make comparisons with other state of the art algorithms"}, {"start": 113.5, "end": 114.5, "text": " easier."}, {"start": 114.5, "end": 119.22, "text": " And, do you know what the bane of many many learning algorithms is?"}, {"start": 119.22, "end": 120.22, "text": " Generalization"}, {"start": 120.22, "end": 125.82, "text": " This means that if the 
technique was designed well, it can be trained on map looking, wooden"}, {"start": 125.82, "end": 131.5, "text": " blocks, and it will do well when it encounters new objects that are vastly different in shape"}, {"start": 131.5, "end": 132.82, "text": " and appearance."}, {"start": 132.82, "end": 136.98, "text": " And as you see on the right, remarkably, this is exactly the case."}, {"start": 136.98, "end": 137.98, "text": " Checkmark"}, {"start": 137.98, "end": 143.45999999999998, "text": " This makes us one step closer to learning algorithms that can see the world around us, interpret"}, {"start": 143.45999999999998, "end": 146.98, "text": " it, and make proper decisions to carry out the plan."}, {"start": 146.98, "end": 174.42, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]
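To make the "two networks, one decision" step in the transcript above concrete, here is a rough, hypothetical sketch in PyTorch. It is not the released VPG code: it ignores the heightmap rotations, rewards, and Q-learning used in the actual paper, and all class and function names are invented for illustration.

```python
# Hypothetical sketch: score pushing and grasping at every pixel, then act on the best one.
import torch
import torch.nn as nn

class AffordanceNet(nn.Module):
    """Fully convolutional net: RGB-D view in, one utility value per pixel out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),  # 4 channels: RGB + depth
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(1)                   # (batch, height, width) utility map

push_net, grasp_net = AffordanceNet(), AffordanceNet()  # one network per motion primitive

def choose_action(rgbd):
    """Return the primitive and pixel location with the highest predicted utility."""
    with torch.no_grad():
        push_q, grasp_q = push_net(rgbd), grasp_net(rgbd)
    if push_q.max() >= grasp_q.max():
        name, q = "push", push_q
    else:
        name, q = "grasp", grasp_q
    idx = torch.nonzero(q == q.max())[0]                # (batch, row, col) of the best pixel
    return name, tuple(idx.tolist())

rgbd = torch.randn(1, 4, 128, 128)                      # a fake RGB-D view of the table
print(choose_action(rgbd))                              # e.g. ('grasp', (0, 57, 93))
```

Roughly speaking, the real system trains these two maps with self-supervised rewards from trial and error, such as a successful grasp or a push that measurably changes the scene, which is how pushing ends up being used specifically when it makes a later grasp easier.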
Two Minute Papers
https://www.youtube.com/watch?v=UkWnExEFADI
Neural Material Synthesis, This Time On Steroids
The paper "Single-Image SVBRDF Capture with a Rendering-Aware Deep Network" is available here: https://team.inria.fr/graphdeco/fr/projects/deep-materials/ Recommended for you - Neural Material Synthesis: https://www.youtube.com/watch?v=XpwW3glj2T8 Pick up cool perks on our Patreon page: https://www.patreon.com/TwoMinutePapers We would like to thank our generous Patreon supporters who make Two Minute Papers possible: 313V, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Emmanuel, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, John De Witt, Kjartan Olason, Lorin Atzberger, Marten Rauschenberg, Michael Albrecht, Michael Jensen, Milan Lajtoš, Morten Punnerud Engelstad, Nader Shakerin, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga. https://www.patreon.com/TwoMinutePapers Crypto and PayPal links are available below. Thank you very much for your generous support! › PayPal: https://www.paypal.me/TwoMinutePapers › Bitcoin: 13hhmJnLEzwXgmgJN7RB6bWVdT7WkrFAHh › Ethereum: 0x002BB163DfE89B7aD0712846F1a1E53ba6136b5A › LTC: LM8AUh5bGcNgzq6HaV1jeaJrFvmKxxgiXg Splash screen/thumbnail design: Felícia Fehér - http://felicia.hu Károly Zsolnai-Fehér's links: Facebook: https://www.facebook.com/TwoMinutePapers/ Twitter: https://twitter.com/karoly_zsolnai Web: https://cg.tuwien.ac.at/~zsolnai/
Dear Fellow Scholars, this is Two Minute Papers with Károly Zsolnai-Fehér. With this technique, we can take a photograph of a desired material and use a neural network to create a digital material model that matches it, which we can then use in computer games and animation movies. We can import real-world materials into our virtual worlds, if you will. Typically, to do this, an earlier work required two photographs, one with flash and one without, to get enough information about the reflectance properties of the material. Then, a follow-up AI paper was able to do this from only one image. It doesn't even need to turn the camera around the material to see how it handles reflections, but can learn all of these material properties from only one image. Isn't that miraculous? We talked about this work in more detail in Two Minute Papers, Episode 88. That was about two years ago. I put a link to it in the video description. Let's look at some results with this new technique. Here, you see the photos of the input materials and, on the right, the reconstructed material. Please note that this reconstruction means that the neural network predicts the physical properties of the material, which are then passed to a light simulation program. So on the left, you see reality, and on the right, the prediction plus simulation results under a moving point light. It works like magic. Love it. As you see in the comparisons here, it produces results that are closer to the ground truth than previous techniques. This method is designed in a way that enables us to create a larger training set for more accurate results. As you know, with learning algorithms, we are always looking for more and more training data. Also, it uses two neural networks instead of one, where one of them looks at local, nearby features in the input and the other one runs in parallel and ensures that the material that is created is also globally correct. Note that there are some highly scattering materials that this method doesn't support, for instance, fabrics or human skin. But since producing these materials in a digital world takes quite a bit of time and expertise, this will be a godsend for the video games and animation movies of the future. Thanks for watching and for your generous support, and I'll see you next time.
[{"start": 0.0, "end": 4.46, "text": " Dear Fellow Scholars, this is two-minute papers with Karo Zsolnai-Fehir."}, {"start": 4.46, "end": 10.14, "text": " With this technique, we can take a photograph of a desired material and use a neural network"}, {"start": 10.14, "end": 16.06, "text": " to create a digital material model that matches it that we can use in computer games and animation"}, {"start": 16.06, "end": 17.06, "text": " movies."}, {"start": 17.06, "end": 21.26, "text": " We can import real-world materials in our virtual worlds, if you will."}, {"start": 21.26, "end": 27.46, "text": " Typically, to do this, an earlier work requires two photographs, one with flash and one without"}, {"start": 27.46, "end": 31.94, "text": " to get enough information about the reflectance properties of the material."}, {"start": 31.94, "end": 37.82, "text": " Then, a follow-up AI paper was able to do this from only one image."}, {"start": 37.82, "end": 43.66, "text": " It doesn't even need to turn the camera around the material to see how it handles reflections,"}, {"start": 43.66, "end": 48.74, "text": " but can learn all of these material properties from only one image."}, {"start": 48.74, "end": 50.46, "text": " Isn't that miraculous?"}, {"start": 50.46, "end": 55.3, "text": " We talked about this work in more detail in two-minute papers, Episode 88."}, {"start": 55.3, "end": 57.18, "text": " That was about two years ago."}, {"start": 57.18, "end": 59.74, "text": " I put a link to it in the video description."}, {"start": 59.74, "end": 62.34, "text": " Let's look at some results with this new technique."}, {"start": 62.34, "end": 68.1, "text": " Here, you see the photos of the input materials and on the right, the reconstructed material."}, {"start": 68.1, "end": 72.82, "text": " Please note that this reconstruction means that the neural network predicts the physical"}, {"start": 72.82, "end": 78.22, "text": " properties of the material, which are then passed to a light simulation program."}, {"start": 78.22, "end": 83.9, "text": " So on the left, you see reality and on the right, the prediction plus simulation results"}, {"start": 83.9, "end": 86.02, "text": " under a moving point light."}, {"start": 86.02, "end": 87.66, "text": " It works like magic."}, {"start": 87.66, "end": 88.86, "text": " Love it."}, {"start": 88.86, "end": 93.58, "text": " As you see in the comparisons here, it produces results that are closer to the ground truth"}, {"start": 93.58, "end": 95.34, "text": " than previous techniques."}, {"start": 95.34, "end": 100.5, "text": " This method is designed in a way that enables us to create a larger training set for more"}, {"start": 100.5, "end": 101.97999999999999, "text": " accurate results."}, {"start": 101.97999999999999, "end": 106.34, "text": " As you know, with learning algorithms, we are always looking for more and more training"}, {"start": 106.34, "end": 107.34, "text": " data."}, {"start": 107.34, "end": 112.3, "text": " Also, it uses two neural networks instead of one, where one of them looks at local, nearby"}, {"start": 112.3, "end": 117.94, "text": " features in the input and the other one runs in parallel and ensures that the material"}, {"start": 117.94, "end": 121.25999999999999, "text": " that is created is also globally correct."}, {"start": 121.25999999999999, "end": 125.62, "text": " Note that there are some highly scattering materials that this method doesn't support,"}, {"start": 125.62, "end": 128.22, "text": " for instance, fabrics or human skin."}, 
{"start": 128.22, "end": 133.74, "text": " But since producing these materials in a digital world takes quite a bit of time and expertise,"}, {"start": 133.74, "end": 138.14, "text": " this will be a godsend for the video games and animation movies of the future."}, {"start": 138.14, "end": 141.94, "text": " Thanks for watching and for your generous support, and I'll see you next time."}]